Development and preliminary psychometric properties of the Transition Competence Battery for Deaf Adolescents and Young Adults.

Developing vocational and independent living skills is critical to the ultimate work and day-to-day success of people with hearing impairments. It is crucial that reliable and valid assessment data be gathered to guide and structure focused and effective instructional programs in these areas (DeStefano, 1987; Frey, 1984; Marut & Innes, 1986; Shiels, 1986; Sligar, 1983). Unfortunately, few instruments are designed specifically to assess the transition skills of adolescents and young adults with hearing impairments (Reiman & Bullis, 1987). Typical practice is to administer traditional psychometric tests (e.g., IQ tests) or functional measures designed for other populations (e.g., vocational skill tests developed for adolescents with learning disabilities). People who administer such measures (interpreters or clinicians) possess varying levels of sign language competence (Levine, 1974; Stewart, 1986). Given the lack of other assessment alternatives, this approach is understandable, but the validity of data gathered in this manner--and of subsequent intervention decisions based on these results--is questionable for two primary reasons.

First, deafness is a condition defined by its unique expressive and receptive communication modalities that differ significantly from those of our English-based hearing society. Many deaf people use a bona fide language (American Sign Language or ASL) that has no structural relationship to English; that relies on visual rather than auditory encoding and decoding; and that has a rule-governed phonology, syntax, and morphology (Reiman & Bullis, 1989). Educators conducting any assessment of deaf people's transition skills should consider this fundamental communicative difference. Can the deaf person use an interpreter in an effective manner in a job interview? Does the individual know his or her legal rights when interacting with a police officer? Can the person formulate a strategy to communicate effectively with coworkers? Questions such as these are highly relevant to successful work and living experiences. A review of published research on measurement procedures with this population, however, reveals that tests designed for other populations do not address these crucial skills in any systematic way, nor have investigations delineated the particular skills and content necessary for deaf people to succeed in work and living endeavors in the community (Bullis & Reiman, 1989; Reiman & Bullis, 1987).

Second, any time the administration procedures of a standardized assessment tool are altered, the validity of the resulting data must be questioned. For example, consider a measure of functional skill knowledge that was devised for use with, and standardized on, a group of adolescents other than persons who are deaf (e.g., adolescents with learning disabilities). If that tool is administered using sign communication in place of verbal instructions, the substitution violates the standardization procedures of the measure and technically invalidates the tool (American Psychological Association, 1985; Gerweck & Ysseldyke, 1979; Yoshida & Friedman, 1986). Consequently, the resulting data are suspect because no psychometric standards exist for the measure under that type of administration.

There is, then, a pressing need to develop language-appropriate, content-relevant, and psychometrically sound measures of transition skills for deaf persons. The purpose of this article is to describe the development and initial standardization data of such an instrument, the Transition Competence Battery for Deaf Adolescents and Young Adults (TCB) (Reiman & Bullis, 1990).

PRELIMINARY DEVELOPMENT PROCEDURES

Three fundamental assumptions guided the development of this test battery. First, it was, and is, our belief that one of the major stumbling blocks in conducting or interpreting research on this group is that all too often the deaf population is regarded as homogeneous. Quite the contrary, the deaf population is highly heterogeneous, encompassing people with varying levels of auditory capability, linguistic skill, cognitive ability, social skill, and emotional development. Consequently, the construction of an assessment battery that is content relevant for a particular subgroup requires clear delineation of the segment of the population for which the instrument is to be used. Based on results of previous research (Bullis, 1985), the population for whom this instrument was constructed can be described in the following way. Note that this description parallels the "low functioning" term that is often used to characterize a group of deaf people who do not attend 4-year colleges or succeed in work or living endeavors in the community, and for whom few services and little research are available (Bowe, 1988).

Members of the subject population have no seriously complicating secondary disability, although some may present mild secondary conditions (e.g., corrected vision, a heart murmur). Further, members of the subject population possess limited English reading skills (e.g., read at approximately the 3rd-grade level). The subject population does not include persons who go on to a 4-year college or university, but it may include persons who attend community college or vocational/technical training centers. More than likely, members of the sample are people who drop out of high school, seek employment immediately after leaving high school, or go on to some type of rehabilitation or community-based training program. Finally, members of the subject population have little experience or training in employment and independent living skills.

Second, although multiple measures and perspectives should be employed in the assessment of any deaf person, this process should begin with an examination of the individual's knowledge of the transition skills requisite to working and living successfully in the community. Knowledge of how to behave is a necessary foundation of behavior (Bandura, 1977), and studies suggest that knowledge of functional skills is correlated with actual skill performance for people with mild cognitive impairments (Bullis & Foss, 1986; Landman, Irvin, & Halpern, 1980).

Third, if assessment is to be connected to transition instruction or training, it is critical that the measurement tools represent the content of the particular domain of concern. That is, measures of functional skills should be composed of items that adequately sample the knowledge and skills necessary for the deaf person to succeed in his or her transition from the school to the community. It follows that the parameters of the transition domain for the target group of deaf persons must be clear. Despite some controversy (Clark & Knowlton, 1988; Rusch & Menchetti, 1988), we believe that transition is best represented by two broad domains of community-based outcomes: employment and independent living skills (Bullis, Bull, Johnson, Johnson, & Kittrell, 1990).

Given these assumptions, we adopted a domain sampling model of test construction (Nunnally, 1978). In this approach, we determined the content definitions of the employment and independent living domains for the target population. We then used these boundaries as a blueprint for test construction and generated test items within these boundaries. As the final part of the development process, we developed and tested a prototype form of the measure. The following sections describe each of these steps.

Specification of the Test Battery's Content Blueprint

There have been extensive writings on employment and independent living for deaf persons, but relatively little research exists on the specific skills comprising these areas. Comprehensive reviews of literature in this field since 1975, on both transition and assessment (Bull, Bullis, & Sendelbaugh, 1987; Bullis, Bull, Sendelbaugh, & Freeburg, 1987; Reiman & Bullis, 1987), reveal only a general description of the particular problems, concerns, and issues facing deaf people in transitions from school to community, or in community adjustment. Accordingly, it was necessary to first identify the particular skills necessary for a deaf person to successfully work and live in the community. We conducted a two-step procedure to establish such a skill taxonomy (Bullis & Reiman, 1989).

First, we held a workshop with 18 professionals from the Northwest in the field of deafness to identify critical work and independent living skills for the target population. The workshop employed the Nominal Group Technique (NGT) (Delbecq, van de Ven, & Gustafson, 1975), a structured group interaction method, to answer two questions.

1. What are the five most important employment-related skills for a member of the target population?

2. What are the five most important independent living-related skills for a member of the target population?

We grouped the lists of skills generated by workshop participants into three employment-related subdomains (job-seeking skills, work adjustment skills, and job-related social/interpersonal skills) and three subdomains related to independent living (money management, health and home, and community awareness).

Second, these skill areas were evaluated in a national survey of practitioners and leading "experts" in the field of deafness, a step designed to ensure the broad geographical representativeness and social validity of the skills (Kazdin, 1977). Respondents rated each skill on two 4-point Likert scales: importance (the skill's importance to the ultimate employment or independent living success of members of the target population) and presence (the percentage of persons in the target population possessing the competency).

A total of 307 deaf and hearing service providers (representing residential and mainstream schools, community colleges, and rehabilitation programs) completed the survey. We conducted analyses with this data set to identify the most critical transition competencies, from the perspective of professional opinion, for an individual in the target population to possess in order to succeed working and living independently. To complement and modify the content definition drawn from both the NGT and the national survey, we used the literature reviews to clarify, expand, and condense this listing. On the basis of this empirical and conceptual examination, we generated a working content blueprint for the TCB. Table 1 shows the six subdomains and their associated content areas for the employment domain; Table 2 shows the subdomains and content areas for the independent living domain. These six subdomains of transition for this part of the deaf population eventually became the six subtests of the TCB. The content areas provided the framework used to structure the item-generation activities.

Generation of Test Items

We assembled a second group of 20 professional service providers (hearing and deaf) for training on test item construction. We reasoned that persons with direct experience with the target population in work or independent living programs, and with a working knowledge of sign communication (ASL, Pidgin Sign English, and Manually Coded English), would be able to write content-relevant and understandable test questions. The goal of the workshop was to teach the requisite skills for writing test questions in each of the previously identified content areas. Participants were paid expenses and a fee for attending and received an additional fee upon satisfactory completion of an item-writing task: each participant agreed to write a specified number of test items in each of the subdomains and their respective content areas. [TABULAR DATA OMITTED]

In the workshop, we placed particular emphasis on defining the categories in which TCB items would be written--the knowledge, comprehension, and application categories (Bloom, 1956)--in the form of 3-option, multiple-choice questions. This format was chosen to minimize correct answers due to guessing (Nunnally, 1978) and because 3-option multiple-choice questions have been demonstrated to be valid for use with adolescents and young adults with mild cognitive disabilities (Bullis & Foss, 1986; Landman et al., 1980). Workshop participants discussed measurement studies relevant to the deaf target group, based on an earlier review of empirical studies (Reiman & Bullis, 1987); they then listened to a presentation detailing rules for constructing multiple-choice items (Gronlund, 1977). The presenters identified, and cautioned participants about, certain linguistic structures (e.g., conditionals, minimal-information pronouns, comparatives, and negatives) of potential difficulty for the target population. The presenters then introduced and reviewed lists of words commensurate with a 3rd-grade reading level (the TCB target level). Finally, the workshop included highly structured practice time for item writing, and all participants wrote and critiqued practice sets of questions. [TABULAR DATA OMITTED]

Six weeks after the workshop, participants submitted more than 900 test items, which we subsequently edited for content, duplication, and adherence to the item-construction criteria. Editing involved eliminating or substantially modifying inappropriate or inaccurate items and distractors. At the conclusion of this process, we retained slightly more than 200 items, distributed across the six subdomains and addressing each of the content areas. Figure 1 presents two examples from the item pool.

Pilot Test of a Written and Signed (Video) Format

To truly measure the subject population's knowledge of TCB content--and not merely reflect their English language capabilities--a combined written and signed administration approach seemed appropriate. Because few guidelines exist on which to base such a tool, we carefully examined the viability of this administration approach. As a starting point, we decided that a small-group (n = 6 to 8) procedure employing videotaped, signed directions, coupled with a simply worded and illustrated test booklet, would be most expedient and cost-effective. The idea was that a group of deaf persons could watch each question and its responses on the monitor while reading the question in the test booklet. On an individual basis, they would mark the correct answer on a separate answer sheet. After a prespecified length of time, the entire group would then be administered the next question.

From the edited item pool, we selected a subset of 30 test items. These items were representative of multiple presentation styles (e.g., positioning and size of signer and character generation) and varying levels of reading complexity. (Note: Some items required students to read actual bus schedules or recipes because these are skills required in the "real world." In these instances, the reading level of the items was not limited to the 3rd-grade level.) We randomly positioned correct answers and distractors for the 30 items, and we developed written materials, including a test booklet and separate answer sheet. We produced a structured process-evaluation form to systematically query subjects regarding their understanding and subjective experience of both the written and signed pilot instrument.

Next, we developed a videotaped version of the 30 test items in American Sign Language (ASL). A certified interpreter signed the question stem and the three possible responses. For many items, the salient information contained in the stem or the responses was reproduced using character generation, which appeared simultaneously with, and just to the left of, the signer.

We administered the pilot test to 36 hearing-impaired subjects located in three sites: mainstream high school juniors and seniors, n = 8; residential high school seniors, n = 16; and community college deaf program students, n = 12. The item difficulty (i.e., the percentage of the subjects who answered an item correctly) for the pilot test items ranged from 13.9% to 97.2%, with an average item difficulty of 64.49%. The internal consistency reliability index for the entire measure was .70. Because the pilot test items were drawn from across the six content areas of the TCB, this index is lower than would be expected in a test of homogeneous content.
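The two pilot statistics just described--per-item difficulty and internal consistency--can be sketched as follows. This is an illustrative computation on a simulated response matrix, not the actual pilot data, and the function names are our own:

```python
import numpy as np

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Percentage of subjects answering each item correctly (rows = subjects)."""
    return scores.mean(axis=0) * 100

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha; for 0/1-scored items this equals KR-20."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 36 subjects, 30 dichotomous items, with success driven by a
# common latent skill so the items cohere (as items within one subtest would).
rng = np.random.default_rng(0)
ability = rng.normal(size=36)
cutoffs = rng.normal(size=30)
scores = (ability[:, None] > cutoffs).astype(int)

print(item_difficulty(scores).round(1))
print(round(cronbach_alpha(scores), 2))
```

With strongly ability-driven items like these, alpha will be comparatively high; a .70 obtained across six heterogeneous content areas, as in the pilot, is unsurprising by comparison.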

Following each administration of the pilot test in the various sites, we conducted a process evaluation. Subjects responded, through a facilitated group discussion format, to questions relating to both the content and administration procedures of the instrument. Overall, these comments were positive regarding the relevance of the test content and the level at which it was presented, but two very important points were apparent across groups. First, the number of subjects reporting difficulty in understanding ASL raised serious questions regarding the viability of this language as the choice for the TCB. Second, subjects reported annoyance with the videotape's moving too slowly. Both the length of the countdown and the response time between items were distressing for some students who, having answered the question, were ready to move to the next question without a prolonged waiting period. A clearly visible result of this situation was restlessness and potentially disruptive behavior on the part of the early finishers. We carefully considered each of these issues in developing the next version of the TCB.

Development of the TCB

From the results of the pilot test, we decided that the videotaped, signed version of the TCB should be presented in Pidgin Signed English (PSE) (sign communication using primarily English word order with ASL signs and ASL grammatical features). The reported difficulties with the slow-moving videotape necessitated a reexamination of the timing patterns within and between test items. At issue was the need to allot increments of response time that would be reasonable given the heterogeneous cognitive, linguistic, and intellectual abilities of each group of six to eight subjects being tested. Based on staff observations during the pilot testing of the actual time subjects used to respond to each item, we adjusted the timing patterns of the videotaped form downward, gauging them to the middle of the continuum of subjects' demonstrated needs. Finally, we maintained the group-administered, 3-option multiple-choice format for the test battery.

All subtests were produced in a professional recording studio using a certified sign language interpreter, with extensive use of character generation on the monitor to highlight key aspects of the test questions and their responses. The items in each subtest were randomly assigned a position, and each item's responses (the correct response and the two distractors) were randomly assigned to position "a," "b," or "c." The six subtests are listed below with the administration time and number of items for each.

* Subtest 1: Job-Seeking Skills for Employment (54 min:28 s; 38 items)

* Subtest 2: Work Adjustment Skills for Employment (40:45; 32)

* Subtest 3: Job-Related Social/Interpersonal Skills for Employment (34:38; 27)

* Subtest 4: Money Management Skills for Independent Living (36:30; 23)

* Subtest 5: Health and Home Skills for Independent Living (42:39; 33)

* Subtest 6: Community Awareness Skills for Independent Living (37:34; 28)

STANDARDIZATION OF THE TCB

The standardization of the TCB was complicated by a practical issue. Administering the entire test battery required 2 to 3 days of testing at 2 to 3 hr per day. Such a commitment of staff time, coupled with the diversion of student time from classes, was demanding, and consequently some sites agreed to administer only the employment subtests or only the independent living subtests. Thus the sample size on which analyses were conducted varied somewhat across subtests. Between 181 and 230 subjects, representing 14 different sites across the United States, participated in the standardization of the test battery. The majority of the deaf persons who took the subtests were male (56% to 58%), were from residential schools (53% to 74%), were deafened prelingually (before age 3) (79% to 84%), and were between 18 and 19 years of age at the time of testing (mean ages 18.69 to 19.07). Data were analyzed through the reliability and item analysis programs of the Statistical Package for the Social Sciences (SPSS Inc., 1989). The rest of this section provides an overview of the preliminary psychometric characteristics of the TCB. [TABULAR DATA OMITTED]

Item Statistics

Two kinds of item statistics were computed for each subtest: item difficulty, the percentage of subjects who answered each item correctly; and point-biserial (item-total) correlations, which indicate the relationship of each item to its total subtest score. As statistical guidelines for retention in the TCB subtests, items were to possess a point-biserial correlation of at least .2 and a moderate level of difficulty (between .4 and .8). We also decided that, to be retained, each item should be conceptually appropriate. Thus, both empirical and logical criteria were used in making the final content decisions for the subtests (Nunnally, 1978).
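The statistical side of this retention screen can be sketched as follows. The response matrix is simulated, and the corrected item-total variant of the point-biserial (correlating each item with the total of the *remaining* items) is our illustrative choice, not necessarily the exact computation used for the TCB:

```python
import numpy as np

def point_biserial(scores: np.ndarray) -> np.ndarray:
    """Correlate each 0/1 item with the total score on the remaining items."""
    n_items = scores.shape[1]
    totals = scores.sum(axis=1)
    r = np.empty(n_items)
    for j in range(n_items):
        rest = totals - scores[:, j]          # corrected item-total
        r[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return r

# Simulated data: 200 subjects, 25 items, success probability driven by a
# common latent ability so item-total correlations are mostly positive.
rng = np.random.default_rng(1)
ability = rng.normal(size=200)
p = 1 / (1 + np.exp(-(ability[:, None] - rng.normal(size=25))))
scores = (rng.random((200, 25)) < p).astype(int)

# Apply the two empirical criteria stated above.
difficulty = scores.mean(axis=0)
r_pb = point_biserial(scores)
keep = (r_pb >= 0.2) & (difficulty >= 0.4) & (difficulty <= 0.8)
print(f"retained {keep.sum()} of {scores.shape[1]} items")
```

The conceptual-appropriateness criterion, of course, has no such mechanical form; it requires the logical review described above.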

By applying these standards, we deleted a total of 18 items from the subtests: 5 from Subtest 1; 1 from Subtest 2; 1 from Subtest 3; 3 from Subtest 4; 4 from Subtest 5; and 4 from Subtest 6. These deletions also reduced the administration time of each subtest to the following levels.

* Subtest 1: Job-Seeking Skills for Employment (48 min:13 s; 33 items)

* Subtest 2: Work Adjustment Skills for Employment (39:48; 31)

* Subtest 3: Job-Related Social/Interpersonal Skills for Employment (33:35; 26)

* Subtest 4: Money Management Skills for Independent Living (32:27; 20)

* Subtest 5: Health and Home Skills for Independent Living (38:34; 29)

* Subtest 6: Community Awareness Skills for Independent Living (33:12; 24)

The third column of Table 3 presents the average item difficulty of each subtest, and the fourth column presents the average item-total, or point-biserial, correlation for each subtest. The average p values range from .507 to .725, and the average point-biserial correlations range from .247 to .404, values consistent with the previously set psychometric criteria.

Subtest Characteristics

The average score of each subtest, its standard deviation, and the average percentage of questions answered correctly are also shown in Table 3. The standardization subjects tended to score lower on Subtests 4 and 5 relative to the other four subtests. There are two possible explanations for the group's lower performance on these two subtests. First, these subtests focus on issues related to living and functioning independently in the community. Deaf adolescents are commonly afforded career and vocational preparation in school settings (Ouellette & Dwyer, 1985), but independent living skill training is a less developed instructional option (Ouellette & Loyd, 1980). Second, in reviewing the subtest items, it is clear that some problems required the student to read and understand a practical math or personal care task (e.g., calculate the savings when an item is discounted 15%, or read a prescription and indicate how often the medication should be taken) and thus required a higher degree of English reading competence. Although we did not--and do not--want to assess deaf persons' reading levels, we did want to measure their functional academic survival skills, which required that items reflect actual math and reading tasks they would face in community settings. [TABULAR DATA OMITTED]

Reliability

Two types of reliability indexes were computed for each of the TCB subtests. Coefficient alpha internal consistency indexes are presented in the first column of Table 4. This coefficient is regarded as a measure of a test's content homogeneity--the degree to which the test items interrelate. Five of the six subtests possess indexes above .75, a level generally regarded as acceptable for group tests of this type (Salvia & Ysseldyke, 1988). Subtest 4, on money management, exhibits lower internal consistency than the others, a result most probably due to its shorter length (20 items) and the restricted variability of its score distribution (Nunnally, 1978); that is, the majority of persons taking the subtest tended to score low, skewing the distribution of scores.

We also conducted a study of each subtest's test-retest reliability. Deaf students from programs willing to participate in this study took either the three employment subtests or the three independent living subtests at one time. The same subjects took the same subtests again 2 to 4 weeks later, and the two sets of scores were then correlated. Sixteen deaf persons from mainstream and community college programs (9 males and 7 females, 13 of whom were deafened prelingually, with an average age of 20.387) were involved in the test-retest study of the employment subtests. Twenty-eight deaf students from residential school programs (15 males and 13 females, 25 of whom were deafened prelingually, with an average age of 17.283) participated in the test-retest study of the independent living subtests. The second column of Table 4 provides the test-retest reliability indexes calculated for each subtest. Again, Subtest 4 exhibited the lowest reliability index, probably for the reasons noted previously.
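The test-retest computation itself amounts to correlating the paired scores from the two administrations. A minimal sketch with fabricated scores for 16 subjects (not the actual study data):

```python
import numpy as np

# Hypothetical subtest scores for the same 16 subjects at time 1 and,
# 2 to 4 weeks later, at time 2.
time1 = np.array([25, 30, 18, 27, 22, 29, 31, 20, 24, 26, 28, 19, 23, 21, 30, 17])
time2 = np.array([27, 29, 20, 26, 21, 30, 32, 19, 25, 27, 27, 21, 22, 23, 31, 18])

# Pearson correlation between the two administrations is the
# test-retest reliability index.
r_tt = np.corrcoef(time1, time2)[0, 1]
print(round(r_tt, 2))
```

When most subjects keep roughly the same rank order across administrations, as in this fabricated example, the coefficient is high; large rank shifts between occasions pull it down.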

Validity

In contrast to reliability, validity is a test property that must be established over time through repeated studies and in various ways (Messick, 1989; Nunnally, 1978). In this project, we were able to address only the content and construct validity of the TCB. [TABULAR DATA OMITTED]

Nunnally (1978) stated that the content validity of a measure is best demonstrated through the procedures followed in its development. That is, the steps followed in identifying and sampling content dictate in large part whether the test adequately samples the domain of concern. Given the extensive procedures we followed to develop a content matrix across the employment and independent living domains and to generate content-relevant test items, we judge the content of the TCB and its six subtests to be valid.

The construct validity of assessment instruments is a complex psychometric property. Recently, experts in the field of measurement (Kerlinger, 1986; Messick, 1989; Nunnally, 1978) have taken the position that construct validity is the most important type of validity for a test to possess. Essentially, construct validity is demonstrated by the way the assessment instrument correlates with a theoretical model of the construct being measured. Table 5 shows the intercorrelations of the TCB subtests and their correlations with pertinent demographic variables for the group of subjects (n = 158) who took the entire test battery. Statistically significant but weak correlations are exhibited among gender, type of school program (mainstream vs. residential), age, and subtest performance. A pattern of low, negative correlations describes the relationship between pre- and postlingual hearing loss (hearing loss before age 3 vs. after age 3) and subtest performance. These indexes are not strong, but they do suggest that successful performance on the TCB is predicated somewhat on early exposure to, and presumably greater proficiency in, English. These results support the notion that certain demographic variables were not pertinent to test performance, but that experience with English was related to test performance--at least to some degree.

A second method to provide initial evidence of the TCB's construct validity used a social comparison approach (Bellack & Hersen, 1988; Bolton, 1987; Kerlinger, 1986; Nunnally, 1978; Wiggins, 1973). In this technique, the researcher theorizes that a subject group for whom the particular measure was not constructed will perform in a very different way on the test than a subject group for whom the measure was designed. Results confirming the hypothesis provide evidence supporting the measure's construct validity.

To establish such a criterion group for our purposes, we contacted the deaf student organization at Western Oregon State College to recruit 13 deaf undergraduate students to participate in this investigation. We hypothesized that this group would score differently--that is, higher--on each of the six TCB subtests than would members of the target population, because of more abundant enculturation experiences, higher English language performance, and more experience in work and independent living endeavors. [TABULAR DATA OMITTED]

To compare the college group's performance on the TCB with that of the target population, two groups of residential subjects and two groups of mainstream subjects (each consisting of 13 subjects) were randomly selected from the standardization pool. Two sets of planned orthogonal contrasts (Klockars & Sax, 1986) were conducted between the college group's scores on each subtest and the scores of the other groups. In this type of analysis, the comparisons among groups are planned in advance by the researcher and conducted according to weightings established to examine pertinent, meaningful differences among groups. To achieve orthogonality (independence) of the comparisons, the weightings must sum to 0 (Keppel, 1982). The two sets of contrasts used in this study are shown in Table 6. For each set of comparisons, the effect size, or magnitude, of the resulting statistic (Cohen, 1988) was computed: the mean of one group minus the mean of the other group, divided by the mean of the two groups' standard deviations.
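The effect-size formula just stated can be computed directly. The two score vectors below are hypothetical 13-subject groups, not the published data:

```python
import numpy as np

def effect_size(g1: np.ndarray, g2: np.ndarray) -> float:
    """Mean difference divided by the mean of the two groups' standard deviations."""
    return (g1.mean() - g2.mean()) / ((g1.std(ddof=1) + g2.std(ddof=1)) / 2)

# Hypothetical subtest scores for a college group and a target-population group.
college = np.array([30, 31, 28, 32, 29, 33, 30, 31, 29, 32, 30, 31, 28])
target = np.array([22, 25, 20, 24, 23, 21, 26, 22, 24, 23, 21, 25, 22])

d = effect_size(college, target)
print(round(d, 2))  # values above .80 count as "large" (Cohen, 1988)
```

Note that this denominator (the mean of the two standard deviations) is the variant the study describes; other conventions pool the variances instead.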

Contrast Set 1 examined the difference between the college students' performance and the average subtest scores of the two residential groups and of the two mainstream groups. Because we were interested only in differences favoring the college group, we adopted a directional hypothesis; specifically, the null hypothesis was that the college group's average score on each subtest would be no greater than the average of either the residential or the mainstream groups. To control for Type I errors, we applied a correction to the alpha level used in each set of comparisons (Keppel, 1982). When a number, or "family," of comparisons pertaining to a certain question are made, the alpha level adopted for the whole set is called the "familywise" error rate; this alpha level is divided by the number of comparisons to establish the per-comparison alpha level. For this first set of contrasts a familywise error rate of .05 was chosen, so the per-comparison alpha level for the two contrasts was set at .025 (.05/2). These results are shown in Table 7.

Contrast Set 2 examined the difference between the college group's scores on each subtest and those of each of the four randomly constructed groups. Again, we were concerned only with score differences favoring the college group, so a directional hypothesis was tested: the null hypothesis was that the college students' average performance on each subtest would be no greater than that achieved by each of the other groups of subjects. A familywise error rate of .05 was again chosen. Because four planned comparisons were conducted for each subtest, the per-comparison alpha level was apportioned at .0125 (.05/4). These results are shown in Table 8.
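The familywise correction used in both contrast sets reduces to dividing the familywise alpha by the number of planned comparisons (a Bonferroni-style apportionment). A minimal sketch reproducing the two per-comparison levels reported above:

```python
def per_comparison_alpha(familywise: float, n_comparisons: int) -> float:
    """Bonferroni-style apportionment of a familywise error rate."""
    return familywise / n_comparisons

print(per_comparison_alpha(0.05, 2))  # Contrast Set 1 -> 0.025
print(per_comparison_alpha(0.05, 4))  # Contrast Set 2 -> 0.0125
```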

For both contrast sets, statistically significant differences were found favoring the college students' performance on each subtest. Calculation of the effect sizes also yielded results that would be considered "large" differences (i.e., effect sizes greater than .80 between pairs of groups on a particular measure; Cohen, 1988). Taken together, these results provide preliminary evidence of the construct validity of the TCB. Because the sample size for this study was relatively small, however, caution should be exercised in regarding these results as definitive. [TABULAR DATA OMITTED]

DISCUSSION

The TCB is the first test battery of its type developed specifically for, and standardized on, a deaf population. Its development addressed novel logistical issues; the instrument appears content relevant for the target population, and overall the TCB demonstrates acceptable initial psychometric properties.

At the same time, several issues demand further investigation. First, are the 3-option multiple-choice format and the group administration method appropriate? These procedures were chosen on the basis of sparse data on assessment practices with deaf persons and our own best guesses from studies with other subject populations; research should be conducted to verify these choices. Second, we have found the videotape medium at times cumbersome and slow, and development of alternative administration media (e.g., videodisc) should be investigated. Third, Subtest 4, on money management, exhibited lower performance indexes and reliability coefficients than are required for a group screening measure; clearly, more work is necessary to revise this subtest. Finally, what is the relationship of skill knowledge to actual performance in real-life settings? We assume that knowledge is the building block of behavior, but at this time no studies describe the interrelationship of knowledge and actual behavior for this segment of the deaf population. [TABULAR DATA OMITTED]

To conclude, this initial examination of the TCB is encouraging regarding its use and continued development. Such research will undoubtedly lead to its revision and strengthening and will increase our understanding of the assessment process for adolescents and young adults who are deaf. Moreover, we hope that investigations of this type will improve service delivery efforts, contributing to the ultimate work and living success of people who are deaf.

REFERENCES

American Psychological Association. (1985). Standards for educational and psychological testing. Washington, DC: Author.

Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.

Bellack, A., & Hersen, M. (1988). Behavioral assessment. New York: Pergamon Press.

Bloom, B. S. (1956). Taxonomy of educational objectives, handbook I: Cognitive domain. New York: McKay.

Bolton, B. (Ed.). (1987). Handbook of measurement and evaluation in rehabilitation. Baltimore: Paul H. Brookes.

Bowe, F. (1988). Toward equality: Education of the deaf. Washington, DC: U.S. Government Printing Office.

Bullis, M. (1985). A dilemma: Who and what to teach in career education programs? In M. Bullis & D. Watson (Eds.), Career education for hearing impaired students: A review (pp. 55-75). Little Rock, AR: Research and Training Center on Deafness.

Bullis, M., Bull, B., Johnson, B., Johnson, P., & Kittrell, G. (1990). School-to-community transition experiences of hearing impaired adolescents and young adults in the Northwest. Monmouth, OR: Teaching Research Division.

Bullis, M., Bull, B., Sendelbaugh, J., & Freeburg, J. (1987). The school to community transition of adolescents and young adults with deafness. Washington, DC: The Catholic University, National Rehabilitation Information Center.

Bullis, M., & Foss, G. (1986). Assessing the employment-related interpersonal competence of mildly mentally retarded workers. American Journal of Mental Deficiency, 91, 433-450.

Bullis, M., & Reiman, J. (1989). Survey of professional opinion on critical transition skills for deaf adolescents and young adults. Rehabilitation Counseling Bulletin, 32, 231-242.

Clark, G., & Knowlton, H. E. (1988). A closer look at transition issues for the 1990s: A response to Rusch and Menchetti. Exceptional Children, 54, 365-368.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Delbecq, A., van de Ven, A., & Gustafson, D. (1975). Group techniques for program planning. Glenview, IL: Scott, Foresman.

DeStefano, L. (1987). The use of standardized assessment in supported employment. In L. DeStefano & F. Rusch (Eds.), Supported employment in Illinois: Assessment methodology and research issues (pp. 55-98). Champaign, IL: Transition Institute.

Frey, W. (1984). Functional assessment in the '80s: A conceptual enigma, a technical challenge. In A. Halpern & M. Fuhrer (Eds.), Functional assessment in rehabilitation (pp. 11-43). Baltimore: Paul H. Brookes.

Gerweck, S., & Ysseldyke, J. (1979). Limitations of current psychological practices for the intellectual assessment of the hearing impaired: A response to the Levine study. Volta Review, 77, 243-248.

Gronlund, N. E. (1977). Constructing achievement tests (p. 25). Englewood Cliffs, NJ: Prentice-Hall.

Kazdin, A. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1, 427-452.

Keppel, G. (1982). Design and analysis: A researcher's handbook (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart and Winston.

Klockars, A., & Sax, G. (1986). Multiple comparisons. Beverly Hills, CA: Sage.

Landman, J., Irvin, L., & Halpern, A. (1980). Measuring life-skills of adolescents. Measurement and Evaluation in Guidance, 13, 95-106.

Levine, E. (1974). Psychological tests and practices with the deaf: A survey of the state of the art. Volta Review, 76, 298-319.

Marut, P., & Innes, C. (1986). The delivery of vocational evaluation and adjustment services to deaf people. In D. Watson, G. Anderson, & M. Taff-Watson (Eds.), Integrating human resources, technology, and systems in deafness (pp. 135-144). Silver Spring, MD: American Deafness and Rehabilitation Association.

Messick, S. (1989). Validity. In R. Linn (Ed.), Educational measurement (3rd ed., pp. 13-104). New York: Macmillan.

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Ouellette, S., & Dwyer, C. (1985). A current profile of career education programs. In M. Bullis & D. Watson (Eds.), Career education of hearing impaired students: A review (pp. 27-54). Little Rock, AR: Research and Training Center on Deafness.

Ouellette, S., & Loyd, G. (1980). Independent living skills for severely handicapped deaf people. Silver Spring, MD: American Deafness and Rehabilitation Association.

Reiman, J., & Bullis, M. (1987). Research on measurement procedures for persons with hearing impairments: An annotated bibliography. Monmouth, OR: Teaching Research Division.

Reiman, J., & Bullis, M. (1989). Integrating students with deafness into mainstream public education. In R. Gaylord-Ross (Ed.), Integration strategies for students with handicaps (pp. 105-128). Baltimore: Paul H. Brookes.

Reiman, J., & Bullis, M. (1990). The Transition Competence Battery for deaf adolescents and young adults. Monmouth, OR: Teaching Research Division.

Rusch, F., & Menchetti, B. (1988). Transition in the 1990s: A reply to Knowlton and Clark. Exceptional Children, 54, 363-364.

Salvia, J., & Ysseldyke, J. (1988). Assessment in special and remedial education (4th ed.). Boston: Houghton Mifflin.

Shiels, J. (1986). Vocational assessment. In L. Stewart (Ed.), Clinical rehabilitation assessment and hearing impairment (pp. 95-110). Washington, DC: National Association of the Deaf.

Sligar, S. (1983). Commercial vocational evaluation systems and deaf persons. In D. Watson, G. Anderson, P. Marut, S. Ouellette, & N. Ford (Eds.), Vocational evaluation of hearing impaired persons: Research and practice (pp. 35-56). Little Rock, AR: Rehabilitation Research and Training Center in Deafness and Hearing Impairment.

SPSS Inc. (1989). SPSS-X user's guide (3rd ed.). Chicago: Author.

Stewart, L. (1986). Clinical rehabilitation assessment and hearing impairment: A guide to quality assurance. Washington, DC: National Association of the Deaf.

Wiggins, J. (1973). Personality and prediction: Principles of personality assessment. Reading, MA: Addison-Wesley.

Yoshida, R., & Friedman, D. (1986). Standards for educational and psychological testing: More than a symbolic exercise. In R. Bennett & C. Maher (Eds.), Emerging perspectives on assessment of exceptional children (pp. 187-193). New York: Haworth Press.
COPYRIGHT 1992 Council for Exceptional Children

Article Details
Author: Bullis, Michael; Reiman, John
Publication: Exceptional Children
Date: Sep 1, 1992

