

Assessment and Instruction of At-Risk Hispanic Students

ABSTRACT: This article summarizes some of the key limitations underlying existing standardized test use with Hispanic students; it also explores recent developments in testing, teaching, and learning research that suggest major improvements in the way testing and assessment can be developed as tools for promoting Hispanic and other students' learning. Assisted performance and dynamic assessment are discussed in the context of individual education plans and a test-train-test cycle.

There are two primary limitations in existing testing practices for Hispanic students. First, the validity and reliability of tests may be reduced because of factors such as students' limited proficiency in the language of a test, lack of familiarity with the content of test items, lack of social and cultural sensitivity on the part of test administrators, and students' lack of familiarity with test-taking strategies (Duran, 1988).

A second limitation is that students' performance on aptitude and achievement tests almost always yields little prescriptive information for instructional interventions. This is potentially damaging to students who, as a result, are tracked into remedial skill-development activities that do not stimulate the development of their self-image as competent learners (Cummins, 1984).


Attenuated Psychometric Properties

The first of these sets of concerns, and accompanying guidelines for improved test use, are discussed in the Standards for Educational and Psychological Testing (American Educational Research Association, 1985), particularly Chapter 13, "Testing Linguistic Minorities." Chapter 13 points out that "for a non-native English speaker, and for a speaker of some dialects of English, every test given in English becomes, in part, a language or literacy test." It can be further added that every test requires examinees to understand specialized uses of language for the sake of test taking itself. For example, examinees are called on to understand terms in test instructions, such as "test item" and "multiple-choice problem."

Hispanic students are a heterogeneous population regarding language background, proficiency in English and Spanish, and sociocultural experiences. The majority of Hispanic elementary school children in the United States come from homes where parents, extended family members, or siblings speak Spanish. Because of the previous and current exposure of these children to communities relying on English or Spanish usage, as well as familial patterns in use of the two languages, the children vary considerably in their proficiency in English and Spanish.

A complication in evaluating the language characteristics of Hispanic students is that the notion of "language proficiency" is itself too simplistic to capture a student's ability to use language in interaction. This shortcoming is connected to the issue of cultural knowledge and its pervasive influence on how people interpret situations and guide their communicative behavior as a result. Over the past two decades, accumulating research by sociolinguists and ethnographers of communication has suggested that the notion of "communicative competence" should replace "language proficiency" as a generic construct describing a person's ability to use language. Standardized language proficiency tests tend to emphasize assessment of a student's mastery of particular vocabulary terms, and knowledge of appropriate grammatical structures. Further, these tests rely heavily or exclusively on pencil-and-paper assessments that are very different from natural communicative activities (Duran, 1988). In contrast, researchers investigating a student's communicative competence have examined how well a student manages interaction with teachers and other students in a bona fide educational-task setting. Research has suggested that the successful functioning of Hispanic and other ethnic-minority students can be affected dramatically by interactional competencies that extend beyond a knowledge of the structural features of language (Goldman & Trueba, 1987). These interactional competencies encompass such skills as knowing when and how to respond to a teacher's questions, how to ask for clarifications of information, and the appropriate demeanor for using language in the activities of a classroom.

One of the most interesting discoveries has been that a student's classroom communicative competence cannot be understood as a characteristic of the individual student per se. It may be affected by the communicative strategies used by other students and a teacher and differences in the social relations among these interlocutors. Carrasco, Vera, and Cazden (1981), for example, discussed in detail evidence that a Hispanic first-grade student was capable of teaching a language arts spelling task to a peer, but was incapable of responding to the teacher's questions about this task.

A naive but compelling solution to the problem of improving language assessment would be to develop and implement "tests of communicative competence." However, this enterprise has proven immensely difficult; only a few models of successful instruments have been developed, and these have not been widely implemented (Rivera, 1984).

Instructional Validity

As mentioned earlier, there is little explicit prescriptive information available from standardized aptitude and achievement tests to guide instructional interventions. Standardized intelligence tests such as the Wechsler Intelligence Scale for Children-Revised (WISC-R), language proficiency considerations aside, at best provide a rough numerical measure of a student's readiness to learn; but they are not designed to measure how ready a student is to learn specific new skills. Similarly, most standardized achievement tests can indicate what academic content a student has yet to master, but not specifically what a student is ready to learn next.

Among existing achievement-testing procedures, curriculum-based assessment techniques are an improvement over other achievement tests. They are designed to generate explicit diagnoses of materials that a student has yet to master, and they can be combined with specific instructional strategies that go on to expose students to new material. These new materials can be selected from an analytical hierarchy of skills and contents that a student is expected to master, and students' progression through a hierarchy can be assessed quantitatively by a variety of measurement methods (Howell & Moorehead, 1987). Curriculum-based assessments and instruction, however, may not attend intimately to the learning process itself because they are not based on explicit cognitive process models of learning that offer "on-line" advice to students during the very act of learning.


The widespread use of standardized aptitude and achievement tests is coupled with a belief that a teacher or school psychologist can make use of test score information in making decisions that positively affect the learning of students. As discussed by Cummins, Rueda, Figueroa, and others in this issue, test scores are used primarily to sort students into those who require special education services and those who do not. The individual education plan (IEP), developed for those students classified as eligible for special education services, is presumed to provide an adequate analysis of the learning needs of a student. Most typically, a student's low test scores are nothing more than a corroboration that he or she has learning needs; there is no immediate connection between the specific skill deficiencies shown by the test scores and the learning intervention specified in the IEP. Perhaps this is reasonable from a practical perspective. After all, it is probably presumed that school learning is not the same as performing on a test--though test performance can be affected by school performance. As Cummins (1984) pointed out, however, the damaging pedagogical conclusion that emerges so often is that students classified as learning disabled are deemed incapable of benefiting from instruction that stimulates the acquisition and use of higher order thinking skills.

Assisted Performance

Tharp and Gallimore (1988) have proposed a strong, criterial definition of teaching based on Vygotskian theory, one that implies a very different notion of assessment as part of effective instruction. Drawing on their extensive research in teaching reading skills to at-risk Hawaiian children, Tharp and Gallimore theorized that teaching occurs only when a teacher or more capable other assists the learning performance of a student during the course of a learning activity. Teaching happens only when a student is assisted in accomplishing previously unattained elements of a learning task. This conception of teaching as assisting the performance of a learner requires, first, that the teacher assess very carefully the learning performance of a student and, second, that the teacher offer assistance that aids the learner in attaining new competencies that otherwise would be unattainable.

Tharp and Gallimore's definition of teaching is based on the Vygotskian notion that there are progressive stages in the development of skills by a learner (Vygotsky, 1978). Initially a learner is capable of learning only when a teacher or more capable other actively supports learning performance by providing useful hints and cues as the student seeks to perform a task. Gradually, the student internalizes these hints and cues and generates them autonomously in an intermediate stage of learning. Later, as the student realizes full competency, self-generated hints and cues are absent; the student then performs a criterion task largely in an automatic mode.

As mentioned earlier, standardized aptitude and achievement tests can provide teachers with general information that can be connected to whole-group instructional intervention. But if Tharp and Gallimore (1988) and Cummins (1984) are correct, this "transmission" mode of teaching is likely to aid only those students whose learning is adequately supported by self-generated cues and hints or whose new learning can be based on well-automatized skills. For educators following traditional and professionally accepted guidelines for standardized testing practices, it is impossible to train students using hints and cues to solve test items and thereby assist new learning.

Dynamic Assessment

In recent years, a testing and cognitive training procedure termed dynamic assessment has sought to combine the assessment of a student's readiness to learn with instruction in cognitive content and cognitive skill areas (see Lidz, 1987, for a review of the field). Prominent researchers in this area, cited in the Lidz volume, include Embretson, Feuerstein, Bransford, Budoff, and Campione and Brown. Work in this field is consonant with Tharp and Gallimore's notion of teaching and learning as "assisted performance" and with Vygotsky's notion of probing a student's capability to advance through his or her zone of proximal development for a learning task.

Dynamic assessment approaches all embody a fundamental test-train-test cycle, though there can be great variation in the target skills and content areas for learning, as well as in testing and training procedures. Further, dynamic assessment approaches rely on a process-oriented model of learning that involves explicit analyses of how deficiencies in learning performance reflect existing competencies and the readiness to develop new competencies necessary for successful learning. Such an analysis is important: it requires that a researcher or cognitive trainer specify a theory or model of skill development, and this model can prescribe hints or cues that can aid a student, given his or her present performance on a learning task. Unlike in typical standardized pencil-and-paper aptitude and achievement testing, a student's response to a criterion problem is evaluated on-line, and the student is provided immediate feedback on how to improve performance.

Dynamic assessment approaches also share the philosophy that a student's learning will be maximally aided by assessing how much help will facilitate learning; here, the educator's goal is to increase a student's ability to self-regulate problem-solving behavior. Too much specific help and advice will not assist learning because it will not help the student develop a better internalized model of how to perform a task. On the other hand, cues that offer too little help result in problem-solving behavior that is ad hoc and uncoordinated. Effective hints and cues permit the student to improve problem-solving performance by systematically extending, modifying, or replacing existing problem-solving strategies.

There appear to be two general approaches to the conduct of dynamic assessment. One emphasizes clinical probing of a student's readiness to master new skills, as with the Learning Potential Assessment Device (LPAD) (Feuerstein, 1979). The cognitive trainer relies on clinical judgment in diagnosing the readiness of a student to learn and clinically determines which hints and cues can promote new learning. A second approach relies on a preestablished hierarchy of cues and hints that represent skill levels in a problem-solving task. In this approach, each response of a student to a problem is matched objectively against the learning hierarchy; and a student is provided an appropriate preestablished cue or hint.

The two approaches for conducting a dynamic assessment have contrasting implications for improving testing and instruction. The clinical approaches can be more easily integrated into everyday classroom interaction and activities, such as reading lessons. For example, the small-group, reading-comprehension training procedure known as "reciprocal teaching" involves clinical assessment of a student's mastery of comprehension strategies coupled with modeling and cues on improving strategies. Research involving the use of reciprocal teaching with Hispanic at-risk students was reported by Padron (1987). She found that the reciprocal teaching procedure was one of two reading intervention strategies that led to a significant improvement of standardized reading achievement test scores among students instructed using the techniques, relative to a comparison group of students who did not receive a reading strategies intervention.

The dynamic assessment approach that emphasizes the use of preordered, strictly defined cues and hints is less well adapted to everyday school activities that allow more free-ranging interaction. This approach, however, has some important advantages that are not easily obtained in the clinical approach. First, the approach readily allows ordinal-level measurement of learning potential. One can count the number of hints and cues required for a student to respond effectively to a problem-solving task. Because hints and cues are offered in the order from "most general" to "most specific," the lower the number of hints and cues needed by students, the greater their learning potential. Further, because hints and cues are offered in a standardized order, it becomes possible to compare the readiness to learn of different students on the same problem-solving task.
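The logic of the preordered approach can be sketched in a short program. The task, hint wording, and simulated student below are hypothetical illustrations only, not an implementation of any published instrument; the sketch simply shows how counting preordered hints yields an ordinal measure of readiness to learn:

```python
def dynamic_assessment(attempt, hints):
    """Run one test-train-test cycle on a single criterion problem.

    `attempt` is a callable modeling the student's try at the problem,
    given the list of hints received so far; it returns True on success.
    `hints` is ordered from most general to most specific.
    Returns the number of hints needed: an ordinal measure of readiness
    to learn (fewer hints = greater learning potential).
    """
    given = []
    if attempt(given):            # unaided attempt first
        return 0
    for hint in hints:
        given.append(hint)        # assist performance with the next cue
        if attempt(given):
            return len(given)
    return len(hints) + 1         # criterion not reached even fully cued


# Hypothetical example: a student who succeeds after two cues.
hints = [
    "Re-read the problem statement.",          # most general
    "Compare this problem with the last one.",
    "Focus on the relation between A and B.",  # most specific
]
student = lambda received: len(received) >= 2
print(dynamic_assessment(student, hints))  # -> 2
```

Because the hint sequence is fixed in advance, the counts returned for different students on the same task are directly comparable, which is what makes the measure ordinal rather than merely clinical.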

A third benefit of the preordered hint-and-cue approach is that it lends itself to construct-validity research investigating whether "dynamic measures" assess skills not represented by standardized intelligence and achievement tests. Campione and Brown (1987) and Bransford, Delclos, Vye, Burns, and Hasselbring (1988) reported statistically significant improvements on criterion problem-solving tasks. These improvements were accounted for by ordinal measures of readiness to learn that were not predictable from standardized aptitude tests.

Another characteristic of dynamic assessment that is important to consider is whether the assessment is geared to (a) general cognitive aptitudes or (b) knowledge and skills in specific academic areas. For example, the LPAD is oriented toward assessing students' readiness to learn general mental operations, such as analogical reasoning, logical multiplication, and seriation (Feuerstein, 1979). By contrast, other researchers have begun to focus intensively on dynamic assessment and training of skills in specific content domains, such as arithmetic (Campione & Brown, in press). The trend toward increasing emphasis on specific academic skill areas is consistent with syntheses of research indicating the limited success of programs to train general thinking skills versus greater success in training specific content skills (Resnick, 1987).


The trends described here suggest that assessment will serve a more effective role if it is able to assist teachers and resource specialists in guiding the everyday instruction of pupils. If assessors were to become "educational diagnosticians" of students' learning potential along the lines described in this article, they would aid teachers and resource specialists in creating interventions for students that would have demonstrated achievement benefits for students. Along these lines, Tharp and Gallimore (1988), in their account of assisted-performance theory, described how a sustained "triadic" relationship between student, teacher, and teacher trainer can benefit both the student and teacher--provided the trainer has sufficient knowledge of how to implement effective instruction. To understand this new paradigm, assessment personnel should undergo new forms of training that draw on cognitive science and sociolinguistic research.

RICHARD P. DURAN is Associate Professor and Director of the University of California Linguistic Minority Research Project, Graduate School of Education, University of California, Santa Barbara.
COPYRIGHT 1989 Council for Exceptional Children

Article Details
Author: Duran, Richard P.
Publication: Exceptional Children
Date: Oct 1, 1989

