
Development of the Teacher Assessment of Student Communicative Competence (TASCC) for Grades 1 Through 5.

With increased calls for documenting the educational impact of communication deficits and treatment outcomes, the need for systematic measures of students' communicative effectiveness in the classroom has never been greater. Teachers are in an excellent position to observe and measure their students' verbal and nonverbal communicative abilities and use of compensatory strategies; thus, speech-language pathologists in schools may benefit from the use of teacher rating scales. This article describes the initial development and possible applications of such a measure, the Teacher Assessment of Student Communicative Competence (TASCC), for use with students in the first through fifth grades.

The need to gain information about children's communicative competence is fueled not only by the desire of speech-language pathologists (SLPs) to understand the strengths and challenges of each child they serve but also by demands from employers, fellow professionals, and consumers for increased accountability. Historically, clinicians have often been content to examine immediate measures of behavioral change in making treatment decisions, followed by repeated testing using standardized tests to substantiate major decisions about initiating or terminating treatment (McCauley & Swisher, 1984). In addition, they have looked to the small body of treatment efficacy research reported in professional journals as further substantiation of the quality of their services. Increasingly, however, speech-language pathologists are being called upon to examine treatment outcomes at several levels (Frattali, 1998), including outcomes affecting an individual's social and educational functioning.

Attention to the functional implications of communication disorders has been advanced through the application of the 1980 International Classification of Impairments, Disabilities, and Handicaps (ICIDH; World Health Organization, 1980) to communication disorders (e.g., Curlee, 1993; Goldstein & Gierut, 1997; Holland & Thompson, 1997; Yaruss, 1998). In the ICIDH system, consequences of a disorder are examined on several levels. At the impairment level, disorders are defined in terms of a behavioral or structural abnormality. At the disability level, disorders are defined in terms of the effects of the impairment on physical function. Finally, at the handicap level, disorders are defined in terms of the effects of an impairment or disability on social function. Until very recently, formal measures designed to assess children's communication disorders have focused on the impairment level (Goldstein & Gierut). As a result, the functional and social implications of communication disorders in children have been largely overlooked in formal assessments.

To date, the most ambitious attempt at assessing (a) the functional impact of communication disorders in children and (b) treatment outcomes has been the American Speech-Language-Hearing Association's development of the Pediatric Functional Communication Measures (FCMs; American Speech-Language-Hearing Association Task Force on Treatment Outcomes and Cost-Effectiveness, 1995). The FCMs are 7-point scales that are being studied for reliability and validity when used to assess a variety of areas--including articulation/phonology, augmentative/alternative communication comprehension, augmentative/alternative communication production, fluency, rate or rhythm, voice production, language comprehension, and language production--by clinicians who have undergone special training. The Task Force is also developing a Pediatric Functional Status Measure (FSM) to be completed by a child's teacher as a means of assessing the child's functional status at initiation of and discharge from treatment.

The need for teacher rating scales and measures of communicative competence is recognized throughout the literature on second language acquisition (Savignon, 1983), psycholinguistics (Ganguly, 1988), social psychology (Buhrmester, Furman, Wittenberg, & Reis, 1988), and augmentative and alternative communication (Light, 1989). Implicitly referring to the evaluation of communication skills, Kent (1993) stated that "appropriate assessment tools are sorely needed for each of these areas [phonological knowledge, language formulation abilities, and sociolinguistic operations] and for their integration, which occurs in the act of speaking" (p. 235). In his review article, Spitzberg (1988) noted that most of the classroom performance evaluation measures currently available "have been subjected to minimal systematic investigation" (p. 69). Measures have been developed to assess preschool children's conversational skills (Girolametto, 1997) and first-grade students' language skills (the Observational Checklist of Conversational Skills; Sanger, Aspedon, Hux, & Chapman, 1995), as have several instruments that assess the communicative competence of college-age students in the college environment (Powell & Avila, 1986). However, few standardized instruments are available for younger students (Spitzberg), in particular students in Grades 1 through 5.

In 1987, Prutting and Kirchner published an article describing the use of a descriptive taxonomy--the Pragmatic Protocol--to evaluate the pragmatic skills of individuals with language disorders. This protocol was designed to provide an overall communicative index for school-age children, adolescents, and adults by evaluating a range of 30 nonverbal (e.g., physical proximity), paralinguistic (e.g., intelligibility), and verbal parameters (e.g., topic selection) that affect communicative competence. In their article, Prutting and Kirchner discussed the importance of assessing a client's conversational language in order to understand how his or her speech and language difficulties affected his or her communicative competence. The authors also emphasized the importance of identifying a client's intact linguistic and paralinguistic abilities in order to use them in designing treatment strategies.

Speech-language pathologists look for both the communication strengths and impairments of a client. For example, when evaluating an individual with communication challenges, an SLP may record a student's low score on an articulation test but also make note of the student's compensatory strategies for communicating messages to others. This is important because individuals with the same degree of reduced intelligibility may nonetheless vary in the strategies they use to increase their communicative effectiveness. Therefore, SLPs observe and evaluate the student's competence with language (knowledge and skills) and the student's ability to compensate (use effective strategies when receiving and sending messages). The presence of such strategies can be a powerful strength for the student. Presently, SLPs struggle with the question of how to effectively measure each student's communication skills and compensatory strategies.

In order to determine which components of communicative competence to address in this scale, a literature review on the available definitions of the construct was conducted. Many definitions for communicative competence are offered in the literature; however, the most useful definitions identified in this review were those of Kent (1993), Light (1989), Savignon (1983), and Whitehead and Barefoot (1992). Although these authors addressed communicative competence for a wider range of populations, their interpretations of communicative competence and its related concepts, such as intelligibility, provided the scale's theoretical basis.

Kent (1993) was particularly influential in the construction of the model that led to the development of the TASCC. In his chapter, he proposed a diagram that consisted of the following four principal dimensions of a child's social use of speech and language:

* intelligibility

* reliance on speech

* appropriateness of communication

* use of clarification and repair strategies

Through his diagram, Kent illustrated that intelligibility is strongly related to a student's social use of language and that decreased intelligibility could limit a student's success with social communication. Additional important elements in the model used here were approach/avoidance attitude toward communication (Guitar, 1998) and nonverbal pragmatic communication (Kirchner & Prutting, 1989; Light, 1989). Due to their similarities, the elements of reliance on speech and nonverbal pragmatic communication were combined to form one dimension for the TASCC. Thus, this five-dimensional model was adapted in order to capture a rich range of behaviors associated with effective communication in the classroom.

METHOD

Preliminary Work

Table 1 lists the subscales and the definitions that were used in generating items. The original pool of 121 items was derived in part from the examination of 10 previous teacher and/or parent rating forms addressing social-behavioral issues (Achenbach, Howell, Quay, & Conners, 1991; Carlson & Stephens, 1986; Eyberg, 1992; Gresham & Elliott, 1990; Hightower et al., 1986; Kendall & Wilcox, 1979; Lahey, Stempniak, Robinson, & Tyroler, 1978; Merrell, 1993; Sanger et al., 1995; Waksman, 1985); 2 self-rating forms addressing negative emotions concerning stuttering (Brutten & Dunham, 1989; Guitar & Grims, 1977); and 2 nonstandardized probes created by local school SLPs for addressing receptive and expressive speech and language abilities (M. H. Hanson & S. K. Keitel, personal communication, November 1996). In addition, items were developed independently. Each item on the scale fits into one of the five defined subscales of communicative competence and is rated using a 5-point scale (1 = never, 2 = seldom, 3 = sometimes, 4 = often, 5 = always).

TABLE 1. Subscales and Definitions
Subscale                            Definition

Approach/avoidance attitude         Communicative comfort level
                                    within verbal interactions
                                    and the willingness to
                                    initiate and maintain
                                    communicative interactions
                                    with others.

Intelligibility                     Ability to combine phonological
                                    segments of his or her sound
                                    system into meaningful
                                    communicative utterances and
                                    present them to others using
                                    appropriate vocal intensity,
                                    inflection, rate, and
                                    articulatory clarity within
                                    communicative interactions.

Clarification/repair                Recognition of communicative
(of output)/                        breakdown and the ability to
  comprehension (of input)          adjust the expressive
                                    communicative message when it is
                                    not understood by the listener;
                                    the implementation of strategies
                                    when clarifying a misunderstood
                                    message for the listener; the
                                    ability to extract meaning
                                    from a speaker's message;  and
                                    the implementation of strategies
                                    when clarifying a misunderstood
                                    message from the speaker.

Appropriateness of communication    Ability to produce and respond
                                    to appropriate communicative
                                    messages that "fit" a
                                    particular social context.

Pragmatic/nonverbal                 Ability to use nonverbal
communication                       communicative means to express
                                    feelings, to express intentions,
                                    and to remain as an equal
                                    communicator in a communicative
                                    interaction.
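
For readers who find a concrete representation helpful, the following minimal sketch (in Python) shows one way the five subscales in Table 1 and the 5-point rating anchors described above could be organized for scoring. The example items are taken from the Appendix, but their assignment to subscales here is a guess, and the mean-based subscale score is an assumption; the article does not publish an item-to-subscale key or a scoring rule.

# Illustrative only: a possible data structure for the five TASCC subscales
# (Table 1) and the 1-5 rating anchors. The item-to-subscale mapping shown is
# a guess, and the mean-based subscale score is an assumption, not a published
# scoring rule.
from statistics import mean

RATING_ANCHORS = {1: "never", 2: "seldom", 3: "sometimes", 4: "often", 5: "always"}

SUBSCALES = {
    "approach/avoidance attitude": [
        "Student freely volunteers answers to questions in class"],
    "intelligibility": [
        "Student's overall speech is understandable"],
    "comprehension/clarification/repair": [
        "Student clarifies and/or rephrases when not understood by the listener"],
    "appropriateness of communication": [
        "Student uses vocabulary that is relevant to the conversation"],
    "pragmatic/nonverbal communication": [
        "Student uses appropriate eye contact when speaking to peers"],
}

def subscale_score(ratings: list[int]) -> float:
    """Average the 1-5 ratings given to the items of one subscale (illustrative)."""
    assert all(r in RATING_ANCHORS for r in ratings)
    return mean(ratings)

print(subscale_score([4, 5, 3]))   # -> 4, a hypothetical subscale score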


Empirical Study of Test Items--Part 1

Preliminary feedback on scale format and item clarity was obtained by distributing scales to fellow graduate students and then to a group of 10 kindergarten to second-grade teachers and a group of 10 fourth- to sixth-grade teachers in two school districts in Maine. Each teacher was asked to rate an "average" student (one who was not receiving speech and language services) in the first or fifth grade, depending on the teacher's regular teaching assignment.

Three sources of information were used during the process of condensing the scale--the results of t tests examining the consistency of item performance for first versus fifth graders, the teachers' responses to scale items, and their narrative feedback. Because differences between grades were not expected, items whose ratings differed significantly between the two grades were excluded (n = 10). Based on these information sources, a total of 57 items were discarded, revised, or combined, reducing the scale's length to 64 items.
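
Although the original analyses were run in SPSS, the per-item grade comparison can be illustrated with a short sketch. The following Python/SciPy example is an illustration only: the data are simulated, the variable names are invented, and a conventional .05 criterion is assumed because the article does not report the significance level that was used.

# Minimal sketch (not the authors' SPSS analysis): flag items whose ratings
# differ significantly between first- and fifth-grade raters. The data, the
# variable names, and the .05 criterion are assumptions for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_items = 121
grade1_ratings = rng.integers(1, 6, size=(10, n_items))   # 10 teachers rating first graders
grade5_ratings = rng.integers(1, 6, size=(10, n_items))   # 10 teachers rating fifth graders

flagged = []
for item in range(n_items):
    t_stat, p_value = ttest_ind(grade1_ratings[:, item], grade5_ratings[:, item])
    if p_value < .05:            # grade differences were not expected,
        flagged.append(item)     # so a significant difference flags the item for exclusion

print(f"{len(flagged)} of {n_items} items flagged")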

Empirical Study of Test Items--Part 2

After the preliminary feedback on item clarity and scale format was obtained and the scale was revised, 200 scales were distributed to teachers in Grades 1 through 5 in Maine, Vermont, Virginia, Texas, and Idaho. The authors wanted to obtain a varied population sample during this project, and they had personal contacts in each of these states who had shown interest in participating in the scale's development. In the majority of cases, individual teachers were not contacted directly; instead, school principals, SLPs, and special education coordinators were invited to participate via phone, mail, and/or e-mail. Each contact person received an introductory letter that briefly explained the steps involved in the study and described how the results would be used. The contact person at each school presented the opportunity to participate in this study to the teachers, and the appropriate number of scales, based on how many teachers expressed a desire to participate, was then sent to the contact person. The school contact person passed out the scales and a brief instructional letter to the participating teachers.

The instructional letter asked each teacher to rate an individual student in his or her grade whom he or she considered to be a "normal" communicator. With the intention of increasing the study's external validity, the authors did not impose criteria for student selection for the "normal" communicative competence group. The intent was to obtain such criteria from the study. At the top of the first page of the scale, each teacher was asked to provide the student's age, grade in school, gender, and ethnicity, as well as narrative feedback about the scale's format, overall length, and item clarity.

A total of 85 scales were returned, and 69 of these were used in the item analysis. Sixteen scales could not be used for the following reasons:

1. Someone other than the intended participant (e.g., a school counselor or an SLP) completed the scale (n = 2).

2. The teacher used double answers 20% or more of the time (e.g., circled "4" and "5" for one item; n = 1).

3. At least one item was left blank on the scale (n = 9).

4. The scale was completed using a student who was not in Grades 1 through 5 (e.g., kindergarten; n = 3).

5. The student's grade was not specified on the scale (n = 1).

Of the 69 students who were rated, more students overall were from the lower grades, and there were more girls than boys, although nine teachers did not give gender information. Table 2 provides specific participant information. Only 29% of the teachers provided ethnicity information about the students; about 45% of these students were from minority groups as specified by their states (e.g., Asian, African American, Filipino, Hispanic).

TABLE 2. Numbers of Participants and Teachers, by Grade
Grade      Participating     Male students      Female students
              teachers            rated              rated

1(a)            14                  5                  7

2(b)            16                  7                  8

3(c)            21                  8                  9

4(b)             9                  3                  4

5                9                  4                  5

Totals          69                 27                 33


(a) Two students' genders were not reported. (b) One student's gender was not reported. (c) Four students' genders were not reported.

RESULTS

Item Analysis

An item analysis was conducted using the Statistical Package for the Social Sciences for Macintosh, version 6.1 (SPSS, Inc., 1994) to decide which items should be kept, revised, combined, or deleted. Specifically, internal consistency reliability was evaluated using Cronbach's coefficient alpha (Feldt & Brennan, 1989), and the level of redundancy of the rating scores was examined for specific items that measured similar components of communicative competence. In addition, narrative feedback from teachers regarding item clarity was considered in the process.

Internal Consistency

In order to measure the internal consistency of the scale, Cronbach's coefficient alpha was calculated for the total scale and for each grade and subscale. This coefficient reflects the extent to which items on the scale, taken as a whole and broken down into groups, measure components of the same construct--in this case, communicative competence (Schiavetti & Metz, 1997). When developing a device that could be used to screen children, it is recommended that the reliability be at least .80 (Salvia & Ysseldyke, 1978). Cronbach's coefficient alpha for the total scale was .9751.
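
For readers unfamiliar with how coefficient alpha is obtained, the following Python sketch computes the standard formula, alpha = [k / (k - 1)] x [1 - (sum of the item variances / variance of the total scores)], where k is the number of items, on simulated data. It illustrates the statistic only and is not a reproduction of the authors' SPSS analysis; the simulated ratings matrix and its dimensions are assumptions.

# Illustrative computation of Cronbach's coefficient alpha on simulated data
# (not the authors' SPSS analysis). `ratings` is a hypothetical raters x items
# matrix of 1-5 scores, built so that the items are correlated.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores)."""
    k = ratings.shape[1]                           # number of items
    item_vars = ratings.var(axis=0, ddof=1)        # variance of each item across raters
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of raters' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(69, 1))            # a shared "competence" level per student
noise = rng.integers(-1, 2, size=(69, 64))         # small item-specific wobble
ratings = np.clip(base + noise, 1, 5)              # 69 scales x 64 items, values 1-5
print(round(cronbach_alpha(ratings), 4))           # high alpha, since items share one factor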

Cronbach's coefficient alphas for the individual subscales and for the individual subscales by grade are displayed in Tables 3 and 4. As can be seen from these coefficients, the individual items were related and all measured a similar construct, which we would argue is communicative competence. Thus, these items appeared to form a cohesive scale.

TABLE 3. Cronbach's Coefficient Alpha for Each Subscale
        Subscale                             Coefficient alpha

Approach/avoidance attitude                        .7729

Intelligibility                                    .9206

Comprehension (of input)/clarification/            .8764
repair (of output)

Appropriateness of communication                   .9461

Pragmatic/nonverbal communication                  .9280


TABLE 4. Cronbach's Coefficient Alpha by Grade Within Each Subscale
                                         Comprehension
          Approach/                       (of input)/      Appropriateness    Pragmatic/
          avoidance                      clarification/          of            nonverbal
Grade     attitude    Intelligibility   repair (of output)  communication    communication

1(a)       .7378          .8872               .8839              .9282            .9135

2(b)       .7182          .9322               .9129              .9703            .9337

3(c)       .7995          .9427               .8644              .9538            .9500

4(d)       .7910          .7080               .6586              .8331            .8016

5(e)       .8330          .9437               .9113              .9453            .9471


(a) n = 12. (b) n = 15. (c) n = 17. (d) n = 7. (e) n = 9.

Redundancy Analysis

A redundancy analysis was conducted on the items that seemed to measure the same underlying component (as indicated through narrative feedback from teachers), as well as on the items that were different only because one item asked about the student's interaction with peers and the other, similarly worded item asked about the student's interaction with adults.

Three criteria were used during the process of condensing the scale:

1. If the teachers' narrative feedback about the items suggested that an item was redundant, confusing, and/or in need of examples, that item was revised or deleted.

2. For item pairs that had not been labeled as redundant in teachers' narrative feedback, if the level of redundancy of paired items was high (more than half the teachers rated each item in the pair similarly; a sketch of this check appears after the list), then one item was omitted and the other was used as it was or was revised to include an idea from the omitted item.

3. Items were kept in spite of Criterion 2 if it was likely that important specific information would be generated from keeping the items separate.
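
The agreement check in Criterion 2 can be made concrete with the brief sketch below. Interpreting "rated similarly" as identical 1-to-5 ratings is an assumption, as are the data and variable names; the article does not define the agreement rule more precisely.

# Sketch of the Criterion 2 pairwise check: treat a candidate pair as redundant
# when more than half the teachers gave both items the same 1-5 rating.
# Interpreting "rated similarly" as identical ratings is an assumption.
import numpy as np

def is_redundant(item_a: np.ndarray, item_b: np.ndarray) -> bool:
    """True when more than half of the raters assigned the two items the same rating."""
    return bool(np.mean(item_a == item_b) > 0.5)

rng = np.random.default_rng(2)
peer_item = rng.integers(1, 6, size=69)                  # e.g., "initiates topics ... with peers"
adult_item = np.where(rng.random(69) < .8, peer_item,    # mostly the same ratings,
                      rng.integers(1, 6, size=69))       # with occasional disagreement
print(is_redundant(peer_item, adult_item))               # True -> consider merging the pair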

During the redundancy analysis, 14 items were deleted, 7 items were combined, 11 items were reworded to increase clarity or to include specific examples, and 33 items were kept with no changes.

After the revisions based on the above analyses were made, the scale contained a total of 50 items that were randomly listed rather than grouped within one of the five communicative competence categories (see the Appendix). The items were randomly listed to ensure that teachers would not be influenced by the title of the subscale when rating a student during future validation studies. Currently, the subscale for Intelligibility contains 8 items, the subscale for Appropriateness of Communication contains 17 items, the subscale for Comprehension (of input)/Clarification/Repair (of output) contains 8 items, the subscale for Pragmatic/Nonverbal Communication contains 10 items, and the subscale for Approach/Avoidance Attitude contains 7 items. By condensing some of the items, the ability to generate specific information for a student is somewhat decreased; however, the scale was condensed because teachers' narrative feedback indicated that its length was a concern. Moreover, specific examples of a student's skills can be generated by the individual teachers and presented along with the information from the scale, if desired.

DISCUSSION

The literature supports the development of a tool to effectively measure the communicative competence of students in earlier grades. A rating scale used by teachers is likely to be most effective because teachers work very closely with their students and have greater access to their performance in classroom special areas. In addition, the use of such a scale would facilitate effective, efficient collaboration between teachers and SLPs on behalf of children with speech and language challenges, which is known to benefit such students (Ebert & Prelock, 1994).

The results of this study--in particular, the high internal consistency scores--indicate that the items on the TASCC are related and work well together. Once further evaluations have been done, this scale may be used to enable teachers and SLPs to recognize and analyze a student's strengths and challenges within the five areas of communicative competence. Numerous practical applications for the TASCC suggest themselves, including its use in providing an overall picture of a student's communication abilities, suggesting intervention targets based on strengths and weaknesses, and establishing baselines for evaluating treatment progress. Finally, use of the TASCC might be expected to facilitate collaboration among team members. Although validation of these uses still needs to be done, the TASCC holds considerable promise as an outcome measure that can be used for program planning and documentation of accountability (Eger, 1998).

The content of this scale fosters the use of a broad perspective by professionals when thinking about a student's communicative competencies. Many teachers who participated in the study commented that they gained a new understanding about communication, in particular, communication skills in the classroom, as they used the scale to rate a student. As part of the narrative feedback, one teacher said, "I had never considered the aspect of nonverbal communication before." A more "global" perspective of a student's communication needs to be integrated into the classroom, if it is not already there. A teacher who uses this scale to describe a student's speech and language challenges and strengths may find it easier to understand (a) the importance of incorporating speech and language intervention into the classroom environment and (b) how this can be done systematically and successfully (Ebert & Prelock, 1994; Ripich, 1989).

Limitations

One limitation of this study was the possible existence of the Rosenthal effect or experimenter bias (Schiavetti & Metz, 1997). The fact that the teacher who selected the "normal" communicating student was the same person who rated this student could have contributed to a biased rating of the student's communicative competence on each of the items. Also, the participating teachers had diverse teaching experiences and might have held varied perspectives as to what is meant by communicative competence within a classroom.

Another limitation was the low return rate from teachers. Schiavetti and Metz (1997) reported that researchers usually achieve a response rate of 30% for questionnaires. They also noted that although a 50% response rate would be adequate for analysis and reporting purposes, 60% is good and 70% is very good. The response rate for this study was 42.5% (85 of the 200 distributed scales); however, some of this shortfall might have been due to problems in delivering the scale to teachers, including a lack of follow-through by the individuals responsible for giving the scales to the teachers. Future studies will include more participants from a wider geographic distribution for further generalization of the scale.

These two limitations represent only a few of the limitations typically affecting any behavioral measure for which the validation process is just beginning. Validation is an ongoing process in which empirical and judgmental evidence is provided to support scientific uses of the measure. Thus, potential users will want to consider future evaluations of the TASCC in relation to the specific purpose and child for which it will be used.

Future Studies

Future studies will address the cultural component of this scale. Research has indicated that a mismatch between the rater's dialect and that of the student can have a negative effect on perceptions of a student's communicative abilities. For example, Ryan and Sebastian (1980) and Miller (1975), as discussed in Powell and Avila (1986), found that accented speech was evaluated more negatively than "standard" English. The authors have chosen to address ethnicity issues by including the following statement in the directions for teachers who use the scale in the future: The ratings on some of the items may be affected by the student's culture; thus, the teacher rating the student should be knowledgeable about the individual student's cultural norms around communication before attempting to rate the student.

The TASCC has potential future implications for a variety of areas of speech and language, such as in work with individuals who have autism, phonological disorders, or stuttering or those who use augmentative and alternative communication systems. Future studies will focus on validating the scale in accordance with the previously stated practical applications. During these studies, the teachers will be informed that they may substitute the words developmentally appropriate for age appropriate on certain items when rating a student whose developmental level is lower than his or her chronological level. This would allow teachers to measure the student's progress according to his or her developmental level over time. Studies currently underway are focusing on the use of the scale to compare students with decreased communicative competence and those with "normal" communicative competence. By demonstrating that contrasting groups differ in performance on the scale, these studies will help contribute to the construct validation of the measure, an ongoing and never-ending process for any measure of this kind. These and numerous other studies will be required to help us understand (a) the extent to which the measure can be used for the various purposes for which it has been designed and (b) for which groups of students its use would be most appropriate.

AUTHORS' NOTE

We wish to thank the professors and graduate students in the University of Vermont's Communication Sciences Department for their help in gathering participants for this study across the nation; Allen Howard for his help in statistics; and teachers, SLPs, and school principals in Maine, Vermont, and Texas for participating in the different phases of this study.

REFERENCES

Achenbach, T. M., Howell, C. T., Quay, H. C., & Conners, C. K. (1991). National Survey of Problems and Competencies among four to sixteen year olds. Monographs of the Society for Research in Child Development, 56 (3, Serial No. 225).

American Speech-Language-Hearing Association Task Force on Treatment Outcomes and Cost-Effectiveness. (1995). User's guide, Phase 1-Group II: National treatment outcome data collection project. Rockville, MD: Author.

Brutten, G., & Dunham, S. (1989). The Communication Attitude Test: A normative study of grade school children. Journal of Fluency Disorders, 14, 371-377.

Buhrmester, D., Furman, W., Wittenberg, M. T., & Reis, H. T. (1988). Five domains of interpersonal competence in peer relationships. Journal of Personality and Social Psychology, 55(6), 991-1008.

Carlson, P., & Stephens, T. (1986). Cultural bias and identification of behaviorally disordered children. Behavioral Disorders, 11(3), 191-199.

Curlee, R. (1993). Evaluating treatment efficacy for adults: Assessment of stuttering disability. Journal of Fluency Disorders, 18, 319-322.

Ebert, K., & Prelock, P. (1994). Teachers' perceptions of their students with communication disorders. Language, Speech, and Hearing Services in Schools, 25, 211-214.

Eger, D. (1998). Outcomes measurement in the schools. In C. Frattali (Ed.), Measuring outcomes in speech-language pathology (pp. 438-452). New York: Thieme.

Eyberg, S. (1992). Parent and teacher behavior inventories for the assessment of conduct problem behaviors in children. In L. Vandecreek, S. Knapp, & T. L. Jackson (Eds.), Innovations in clinical practice: A source book (Vol. 11, pp. 261-270). Sarasota, FL: Professional Resource Press.

Feldt, L. S., & Brennan, R. L. (1989). Reliability. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 105-146). New York: American Council on Education and Macmillan.

Frattali, C. (1998). Measuring outcomes in speech-language pathology. New York: Thieme.

Ganguly, S. R. (1988). On measuring communicative competence. Psycho-Lingua, 18(1), 23-32.

Girolametto, L. (1997). Development of a parent report measure for profiling the conversational skills of preschool children. American Journal of Speech-Language Pathology, 6(4), 25-32.

Goldstein, H., & Gierut, J. (1997). Outcomes measurement in child language and phonological disorders. In C. Frattali (Ed.), Measuring outcomes in speech-language pathology (pp. 406-437). New York: Thieme.

Gresham, F., & Elliott, S. (1990). Social skills rating system/Social skills questionnaire. Circle Pines, MN: American Guidance Service.

Guitar, B. (1998). Stuttering: An integrated approach to its nature and treatment. Baltimore: Williams & Wilkins.

Guitar, B., & Grims, S. (1977, November). Developing a scale to assess communication attitudes in children who stutter. Poster session presented at the American Speech-Language-Hearing Association Convention, Atlanta, GA.

Hightower, A., Work, N. C., Cowen, E. L., Lotyczewski, B. S., Spinell, A. P., Guare, J. C., & Rohrbeck, C. (1986). The Teacher-Child Rating Scale: A brief objective measure of elementary children's school problem behaviors and competencies. School Psychology Review, 15(3), 393-409.

Holland, A. L., & Thompson, C. K. (1997). Outcomes measurement in aphasia. In C. Frattali (Ed.), Measuring outcomes in speech-language pathology (pp. 245-266). New York: Thieme.

Kendall, P., & Wilcox, L. (1979). Self-control in children: Development of a rating scale. Journal of Consulting and Clinical Psychology, 47(6), 1020-1029.

Kent, R. (1993). Speech intelligibility and communicative competence in children. In A. P. Kaiser & D. B. Gray (Eds.), Enhancing children's communication research foundations for intervention (pp. 223-239). Baltimore: Brookes.

Kirchner, D., & Prutting, C. (1989). Criteria for communicative competence. Seminars in Speech and Language, 10(1), 42-49.

Lahey, B. B., Stempniak, M., Robinson, E. J., & Tyroler, M. J. (1978). Hyperactivity and learning disabilities as independent dimensions of child behavior problems. Journal of Abnormal Psychology, 87(3), 333-340.

Light, J. (1989). Toward a definition of communicative competence for individuals using augmentative and alternative communication systems. Augmentative and Alternative Communication, 5(2), 137-144.

McCauley, R. J., & Swisher, L. (1984). Uses and misuses of norm-referenced tests in clinical assessment: A hypothetical case. Journal of Speech and Hearing Disorders, 49, 338-348.

Merrell, K. (1993). Using behavior rating scales to assess social skills and antisocial behavior in school settings: Development of the School Social Behavior Scales. School Psychology Review, 22(1), 115-133.

Powell, R., & Avila, D. (1986). Ethnicity, communication competency and classroom success: A question of assessment. The Western Journal of Speech Communication, 50, 269-278.

Prutting, C., & Kirchner, D. (1987). A clinical appraisal of the pragmatic aspects of language. Journal of Speech and Hearing Disorders, 52, 105-119.

Ripich, D. (1989). Building classroom communication competence: A case for a multi-perspective approach. Seminars in Speech and Language, 10(3), 231-240.

Salvia, J., & Ysseldyke, J. E. (1978). Assessment in special and remedial education. Boston: Houghton Mifflin.

Sanger, M., Aspedon, M., Hux, K., & Chapman, A. (1995). Early referral of school-age children with language problems. Journal of Childhood Communication Disorders, 16(2), 3-9.

Savignon, S. (1983). Definitions of communicative competence. In S. Savignon (Ed.), Communicative competence: Theory and classroom practice: Texts and contexts in second language learning (pp. 1-49). Reading, MA: Addison-Wesley.

Schiavetti, N., & Metz, D. (1997). Evaluating research in communicative disorders (3rd ed.). Boston: Allyn & Bacon.

Spitzberg, B. (1988). Communication competence: Measures of perceived effectiveness. In C. Tardy (Ed.), A handbook for the study of human communication: Methods and instruments for observing, measuring, and assessing communication processes (pp. 67-105). Norwood, NJ: Ablex.

SPSS, Inc. (1994). Statistical package for the social sciences--Macintosh version 6.1 [Computer software]. Chicago: Author.

Waksman, S. (1985). The development and psychometric properties of a rating scale for children's social skills. Journal of Psychoeducational Assessment, 3, 111-121.

Whitehead, B., & Barefoot, S. (1992). Improving speech production with adolescents and adults. The Volta Review, 94, 119-134.

World Health Organization. (1980). International classification of impairments, disabilities, and handicaps: A manual of classification relating to the consequences of disease. Geneva: World Health Organization.

Yaruss, S. (1998). Describing the consequences of disorders: Stuttering and the International Classification of Impairments, Disabilities, and Handicaps. Journal of Speech, Language, and Hearing Research, 41(2), 249-257.

APPENDIX: TEACHER ASSESSMENT OF STUDENT COMMUNICATIVE COMPETENCE (TASCC)
Student's:            Age            Gender            Ethnicity

Below are a series of items that describe a student's communicative
competence. Use the following scale to rate a student in your grade
whom you consider to have communicative competence issues. For each
item, circle the number that best describes the student's
communication. Please answer each item as well as you can, even if
the item does not seem to apply to the student.

1 = Never    2 = Seldom    3 = Sometimes    4 = Often   5 = Always

1) Student remains attentive when             1    2    3    4    5
   others communicate with him
   or her

2) Student verbally relates thoughts          1    2    3    4    5
   in an age-appropriate, meaningful
   manner to adults

3) Student adjusts style and content          1    2    3    4    5
   of speech according to the
   communication partner and situation

4) Student appears to nonverbally             1    2    3    4    5
   relate feelings in an age-appropriate,
   meaningful manner (e.g., facial glare,
   smile)

5) Student demonstrates age-appropriate       1    2    3    4    5
   nonverbal requests for message
   repetition (e.g., makes a "puzzled"
   face)

6) Student participates in age-appropriate    1    2    3    4    5
   turn-taking in conversations and class
   discussions

7) Student demonstrates age-appropriate       1    2    3    4    5
   verbal requests for message repetition
   (e.g., "Could you say that again?" or
   "What?")

8) Student uses appropriate voice             1    2    3    4    5
   inflection when speaking (e.g.,
   intonation with questions)

9) Student uses appropriate eye contact       1    2    3    4    5
   when speaking to adults

10) Student gets the listener's attention     1    2    3    4    5
    before the student introduces a topic

11) Student uses age-appropriate opening      1    2    3    4    5
    and closing communication comments in
    conversations with peers (e.g.,
    "Hello"; "See you later.")

12) Student's speech is understandable        1    2    3    4    5
    even when the topic is unknown

13) Student participates in story             1    2    3    4    5
    description/retell interactions

14) Student verbally relates thoughts in      1    2    3    4    5
    an age-appropriate, meaningful manner
    to peers

15) Student sticks up for his or her own      1    2    3    4    5
    views when confronted by group
    pressure

16) Student's overall speech is               1    2    3    4    5
    understandable (e.g., clear voice,
    clear articulation)

17) Student nonverbally expresses             1    2    3    4    5
    frustration toward peers, when
    appropriate

18) Student responds within an appropriate    1    2    3    4    5
    time frame to remarks, questions,
    requests

19) Student joins conversations with peers    1    2    3    4    5
    easily

20) Student uses vocabulary that is           1    2    3    4    5
    relevant to the conversation

21) Student appropriately engages in group    1    2    3    4    5
    discussions

22) Student uses appropriate rate of          1    2    3    4    5
    speech for situation

23) Student initiates topics of               1    2    3    4    5
    conversation in one-to-one situations
    with adults

24) Student initiates topics of               1    2    3    4    5
    conversation in one-to-one situations
    with peers

25) Student adjusts vocal intensity to        1    2    3    4    5
    account for distance and noise
    variables

26) Student freely volunteers answers to      1    2    3    4    5
    questions in class

27) Student uses speech effectively in        1    2    3    4    5
    directing peer's actions, when
    intended

28) Student's speech is understood by         1    2    3    4    5
    unfamiliar listeners

29) Student uses appropriate eye contact      1    2    3    4    5
    when speaking to peers

30) Student uses age-appropriate humor        1    2    3    4    5
    within peer conversations

31) Student uses age-appropriate verbal       1    2    3    4    5
    communication to gain attention

32) Student nonverbally expresses             1    2    3    4    5
    frustration toward adults, when
    appropriate

33) Student uses a variety of                 1    2    3    4    5
    age-appropriate (or better)
    vocabulary words

34) Student seems to understand               1    2    3    4    5
    age-appropriate humor within peer
    conversation

35) Student clarifies and/or rephrases        1    2    3    4    5
    when verbal communication is not
    understood by the listener

36) Student uses age-appropriate (or          1    2    3    4    5
    better) sentence length when
    answering questions in class

37) Student is able to shift to               1    2    3    4    5
    different topics within conversations

38) Student links his or her words together   1    2    3    4    5
    with age-appropriate (or better)
    grammatical structures

39) Student follows 3-step instructions       1    2    3    4    5
    with minimal need for repetitions or
    visual cues

40) Student's speech is understood even       1    2    3    4    5
    when the speech becomes more complex
    (e.g., longer sentences, change in
    topic)

41) Student verbally or nonverbally           1    2    3    4    5
    indicates that he or she understands
    speaker's message

42) Student is able to integrate              1    2    3    4    5
    information presented auditorily
    (e.g., lessons, stories, a sequence
    of directions) and comprehend the
    meaning

43) Student identifies characters/people      1    2    3    4    5
    in conversations

44) Student uses age-appropriate              1    2    3    4    5
    (or better) sentence length when
    having a conversation

45) Student uses the environment to get a     1    2    3    4    5
    message across when the student's
    verbal communication is not understood
    (e.g., points to relevant objects or
    people)

46) Student seems to understand nonverbal     1    2    3    4    5
    communication (e.g., gestures)

47) Student uses age-appropriate nonverbal    1    2    3    4    5
    communication to gain the attention of
    adults

48) Peers and adults seem to understand       1    2    3    4    5
    what the student says to them

49) Student interacts with a variety of       1    2    3    4    5
    peers and adults

50) Student uses age-appropriate nonverbal    1    2    3    4    5
    communication to gain the attention of
    peers (e.g., wave, gentle tap)


Copyright 1998 by A. R. Smith, R. McCauley, and B. Guitar. Used with permission.

Ann R. Smith, SLP, MS, is at the Northeast Hearing and Speech Center in Portland, Maine, and is completing her third year on the Vermont Rural Autism Project. Rebecca McCauley, PhD, is a professor of communication sciences at the University of Vermont and has authored numerous publications addressing measurement issues in children's communication disorders, particularly language disorders. Barry Guitar, PhD, a professor of communication sciences at the University of Vermont, is the author of a widely used text on stuttering as well as numerous related research publications. His involvement in this project is based on his interest in studying and improving treatment methods for school-age children who stutter. Address: Ann Smith, 62 Old Farm Road, South Portland, ME 04106.
