The RISK screening test: using kindergarten teachers' ratings to predict future placement in resource classrooms.

ABSTRACT: Teacher ratings from four consecutive cohorts of kindergarten students were used to establish a prediction function by which children who ultimately received special education services in the form of resource-class placement were discriminated from children who remained solely in regular education classrooms. All five factors measured by the RISK scale were significantly related to future school performance, but items that assessed child ability, current performance, and teacher investment were most predictive of eventual special-class placement. Overall accuracy for the screening measure was 94.09%, with 1,194 out of 1,269 children correctly assigned to their appropriate educational placement.

The early identification of children who are likely to require special educational assistance has long been a priority of applied researchers (Butler, Marsh, Sheppard, & Sheppard, 1985; deHirsch, Jansky, & Langford, 1966; Feshbach, Adelman, & Fuller, 1974; Mercer, Algozzine, & Trifiletti, 1988). The impetus for early identification is based on the assumption that both developmental and educational intervention are likely to be more effective if they occur before the child's deficiencies become massive and are compounded by the numbing effects of extended school failure (Satz & Fletcher, 1979). Vacc, Vacc, and Fogelman (1987) reported that Project Child found that half of their children who were later identified as having disabilities could have received more effective remediation had their difficulties been diagnosed earlier in development. Muehl and Forell (1973) indicated that, regardless of the type and amount of subsequent intervention, earlier diagnosis was associated with better school performance after 5 years.

The adequacy of current screening and readiness measures for young children is much in question. Joiner (1977) reviewed 151 tests and procedures used by school districts in New York and found only 16 that could be considered marginally appropriate. More recently, the Michigan Department of Education (1984) judged that only 10 of the 111 tests and procedures being used within the state for screening and readiness were suitable for those purposes. Even widely used screening tests, such as the Gesell, have been criticized severely for their lack of psychometric integrity (Meisels, 1985, 1987). Satz and Fletcher (1988) have suggested that the procedures used to establish the validity of the Denver and the DIAL-R, two other common screening measures, are inadequate and inappropriate and misrepresent the utility of these instruments.

Our efforts to establish useful early identification procedures have been hampered by both theoretical and methodological difficulties. At a theoretical level, we remain undecided concerning which aspects of the child's functioning are most predictive of subsequent failure. Should we use a readiness model that relies primarily on the assessment of reading correlates such as perceptual-motor and linguistic skills (Mann, 1984)? Or should we include other key variables such as task orientation, motivation, and social skills that interact to influence school success and failure (Adelman & Feshbach, 1971)? Moreover, is it more profitable to use identification procedures based on standardized psychometric tests (Vacc, Vacc, & Fogelman, 1987), or should we rely on teachers' ratings of students' characteristics (Fletcher & Satz, 1984)?

A major methodological problem in devising early identification measures is how best to assess predictive validity. Typically, researchers collect a variety of measures during kindergarten which are subsequently used to predict 1st- or 2nd-grade reading achievement based on the results of standardized tests. Results are summarized through either correlational or classificational approaches (Butler, Marsh, Sheppard, & Sheppard, 1985). The correlational approach yields multiple correlations between predictor and criterion variables, which indicate the amount of variance in reading scores that can be explained by the screening measures. This approach provides evidence of the relationship of screening tools to subsequent reading level across all levels of reading ability, but it does little to suggest which children in particular are at risk for school failure (Lichtenstein, 1981).

Assuming that screening tests have the explicit purpose of assigning individuals a status (at risk or not at risk), classification approaches to predictive validity establish a cutoff score on the criterion measure below which the child is said to be at risk (e.g., below the 25th percentile on a standardized reading test) and then attempt to use the screening results to identify subjects who ultimately fall into the risk group. Predictions are usually generated through discriminant function analysis, which differentially weights the various screening variables to maximize the differences between risk and nonrisk groups on a linear vector of the original items (Pedhazur, 1982). Predictive validity is then judged in terms of the proportion of subjects whose group membership (at risk or not at risk) is correctly identified, as well as the pattern of false positive and false negative identifications (see Mercer, Algozzine, & Trifiletti, 1988, for further discussion).
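
To make the classification approach concrete, the sketch below illustrates it in Python with hypothetical screening and reading scores (the variable names and data are illustrative, not the authors' analysis): a discriminant function is calibrated on the screening variables against a 25th-percentile reading cutoff, and the resulting predictions are tabulated in a prediction-performance matrix.

```python
# Illustrative sketch of the classification approach to predictive validity
# (hypothetical data; not the authors' original analysis).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical kindergarten screening scores (n children x k screening variables).
n, k = 500, 5
screening = rng.normal(size=(n, k))

# Hypothetical later reading scores; children below the 25th percentile
# form the "at risk" criterion group.
reading = screening @ rng.normal(size=k) + rng.normal(scale=2.0, size=n)
at_risk = (reading < np.percentile(reading, 25)).astype(int)

# The discriminant function weights the screening variables to maximize
# separation between the risk and nonrisk groups.
lda = LinearDiscriminantAnalysis().fit(screening, at_risk)
predicted = lda.predict(screening)

# Prediction-performance matrix: rows = actual status, columns = predicted status.
print(confusion_matrix(at_risk, predicted))
```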

Although classification matrixes provide useful information about the predictive validity of screening measures, they must be analyzed carefully. Many studies present high accuracy rates that are misleading with regard to the value of the instrument. Because the number of children who are at risk for educational difficulties represents a small proportion of the entire school population, it is possible that a screening measure that never identified any child as being at risk could still have a respectable overall accuracy of prediction. Fletcher and Satz (1984) reviewed several studies in which screening measures achieved total accuracy rates of over 70% while failing to identify two out of every three children who ultimately required assistance.

A second concern in using discriminant function approaches to prediction is the stability of the weights given to variables in determining the prediction equation. To ensure the validity of the prediction equation, multiple samples must be employed. One sample serves to calibrate the equation, and the resulting weights are then used to make predictions on a second sample. Only when the accuracy of the prediction equation is comparable across the two independent samples can it be said to have population validity.
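
A minimal sketch of this two-sample check, assuming hypothetical arrays X (screening variables), y (risk status), and index arrays marking the calibration and target cohorts, might look as follows; it is meant only to illustrate the logic of calibrating on one sample and scoring on another.

```python
# Illustrative two-sample (consensual) validation of a discriminant function.
# X, y, calib_idx, and target_idx are hypothetical; this is not the authors' code.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def split_sample_validation(X, y, calib_idx, target_idx):
    """Calibrate the discriminant function on one sample, then compare
    classification accuracy on the calibration and target samples."""
    lda = LinearDiscriminantAnalysis().fit(X[calib_idx], y[calib_idx])
    calib_accuracy = lda.score(X[calib_idx], y[calib_idx])
    target_accuracy = lda.score(X[target_idx], y[target_idx])
    # Comparable accuracies across the two independent samples suggest that
    # the discriminant weights generalize beyond the calibration cohorts.
    return calib_accuracy, target_accuracy
```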

The following report presents the results of a 6-year study in which four cohorts of children were rated by teachers on the RISK scale (Coleman & Dover, 1989) during the second semester of their kindergarten year. Students were then tracked while they were in Grades 3 through 6 and divided into two groups who either had or had not received special education services within a resource room. These placements were then compared to predicted placements (regular or resource class) made from the screening measure using a prediction/performance matrix adjusted for the conditional probabilities of membership in each group. Two cohorts were used as a calibration sample from which to generate the prediction equation. The remaining two cohorts served as a target sample to judge the consensual validity of the prediction equation.

METHOD

Subjects

Subjects were kindergarten students in two school districts adjacent to a major metropolitan area in the Southeast. One school district served a small community with a population of about 30,000, and the second school district enrolled children from both rural and suburban areas. The kindergarten program was offered to families by the school districts but was not mandatory. Teachers rated a total of 2,306 children over a 4-year period beginning in the spring of 1980. Each cohort represented the entire kindergarten population for that year. The actual number of students rated year to year varied as a function of school enrollment patterns but ranged from a low of 533 in 1980 to a high of 713 in 1983. The 1983 enrollment jump resulted from an expansion of kindergarten services by the school districts.

In the fall of 1986 we returned to the school districts to locate those students who had been rated on the RISK scale while in kindergarten. By this time, the four cohorts of subjects were attending Grades 3-6. Two procedures were used to locate students. The special education students were located through a computerized database maintained by the school districts. This database contained 225 students from the original kindergarten samples who had received some type of special education service other than (or in addition to) speech therapy since their kindergarten year. In some cases, these students had transferred out of the districts since kindergarten, but their academic records remained, so they were included in the study. The second procedure required reviewing the master student lists of all elementary schools in the two districts; this review yielded 1,081 students without disabilities from the original samples who were still in the school systems.

School records were available for a total of 1,306, or 56.5% of the children in the original samples. As would be expected, this percentage varied as a function of the academic year in which the children were originally rated. Only 51% of the subjects from the 1979-80 academic year remained within the school districts, whereas over 69% from the 1982-83 academic year were still enrolled. This sample was 51.5% male and 83% white; virtually all minority students were black.

The special education sample of 225 students was predominantly male (66%) and white (70.6%). Over 83% (n = 188) of these students were attending resource classrooms for 1 or 2 hr daily; the remaining students (n = 37) were children with moderate to severe disabilities who were receiving more specialized services. We decided to include only the resource students in the present study, for two reasons. First, because the subjects with more severe disabilities had not been a part of the school districts' testing program, there were no group achievement test data available on them. Second, we were concerned that including these subjects might artificially inflate the prediction accuracy of the rating measure. Resource students were primarily labeled as having learning disabilities (LD) (85%), although there were small groups of children with emotional disturbance (ED), behavior disorders (BD), and mild mental retardation (MR). We made no attempt to partition these groups because screening measures are unsuitable for differential diagnosis, and the sensitivity of the RISK scale is based on the presence of school difficulties without regard to their origin. The resource students with mild disabilities were identified by the participating school districts using procedures mandated by the Tennessee State Department of Education. Eligibility criteria varied by type of disability: LD children were identified based on a discrepancy between intelligence and achievement; MR children, based on intelligence and adaptive behavior; ED and BD children, based on intellectual, academic, and behavioral indexes; and sensorially impaired children, based on assessments of sensory acuity and academic needs.

Instrument

Kindergarten teachers rated each of their students on 43 items that required ability estimates in several areas and teachers' judgments as to how often various facets of the child's behavior occurred. Teachers were asked to make their ratings, judging each child in relationship to other children in the same kindergarten classroom. Each item was rated on a scale ranging from 1 to 6. Some items were worded positively, others negatively. In scoring the ratings, negative items were recoded so that higher numbers always indicated more positive ratings.

The 43 items were first converted to Z scores based on individual classrooms and then analyzed using principal components factor analysis followed by an oblique rotation. Five factors containing 34 items were retained, using an eigenvalue of at least 1 as the retention criterion. These factors accounted for 69.69% of the variance in the original matrix. Cronbach alphas for the five factors and for the full 34-item scale are as follows: school competence, .964; task orientation, .920; social, .894; behavior, .63; motor, .61; and total score, .951. All subsequent analyses were then based on the 34-item RISK scale.
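
The scoring steps described above (reverse-coding negatively worded items, standardizing within classrooms, and estimating internal consistency) could be expressed roughly as follows; the data frame and column names are hypothetical, and the factor analysis itself is omitted.

```python
# Sketch of the RISK scoring steps (hypothetical data frame and column names).
import pandas as pd

def reverse_code(series, low=1, high=6):
    """Recode a negatively worded item so that higher values are more positive."""
    return (low + high) - series

def z_within_classroom(df, item_cols, class_col="classroom"):
    """Standardize each item within its own kindergarten classroom."""
    return df.groupby(class_col)[item_cols].transform(
        lambda s: (s - s.mean()) / s.std(ddof=1))

def cronbach_alpha(items):
    """Cronbach's alpha for a DataFrame of items (rows = children)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```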

The school competence factor contains 12 items that assess the child's ability level and current school performance as well as the necessity for the teacher to modify instructional/curricular procedures to meet the child's educational needs. Seven items in the task orientation factor measure the child's task perseverance and freedom from distractibility. Five social items yield teachers' ratings of the child's social skills and comfort level with peers and in new situations. The behavior domain contains six items that address the child's antisocial behavior and resistance to the teacher. The final factor, motor, contains three items relating to fine and gross motor skills. (See Coleman & Dover, 1989, for a more technical explanation of instrument development and validation.)

Each student's most recent Stanford Achievement Test scores, expressed as normal curve equivalents (NCEs), were collected on the following subtests: Reading, Math, Language, and Composite. Otis-Lennon IQ scores, administered to all students during the 2nd grade, were also obtained. Students were considered to have received special education services if they were listed in the school district's special education database.

Teacher ratings were collected during the early spring of each academic year, thus allowing the kindergarten teacher one semester's experience with the children. The participating school districts did not have access to the results of the inventory; this prevented the ratings from contaminating future referral and placement decisions.

RESULTS

The five RISK factors were used in a stepwise multiple regression to examine their independent contributions to the prediction of resource classroom placement. The canonical correlation between resource membership and the five predictor variables was .739, indicating that a linear combination of the five factors accounted for over 54% of the variance in the criterion variable. Moreover, each of the five factors yielded increments in the value of R-squared with probabilities less than .10. The school competence factor explained about 86% of the variance accounted for by the model; the task orientation, motor, social, and behavior factors accounted for the remainder and are listed here in the order of their overall contribution to the model R-squared.
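
The incremental-contribution logic reported here can be sketched as follows; the column names are hypothetical, and ordinary least squares stands in for whatever stepwise routine the authors actually used.

```python
# Sketch of examining each factor's incremental contribution to R-squared
# (hypothetical DataFrame columns; not the authors' analysis).
import statsmodels.api as sm

FACTORS = ["school_competence", "task_orientation", "motor", "social", "behavior"]

def incremental_r2(df, outcome="resource_placement", predictors=FACTORS):
    """Fit models that add one predictor at a time and report the R-squared gain."""
    gains, included = [], []
    prev_r2 = 0.0
    for factor in predictors:
        included.append(factor)
        model = sm.OLS(df[outcome], sm.add_constant(df[included])).fit()
        gains.append((factor, model.rsquared - prev_r2))
        prev_r2 = model.rsquared
    return gains
```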

Having determined that all five RISK factors were significantly related to school placement decisions, we conducted a series of discriminant function analyses to assess the predictive validity of the inventory. These analyses allowed for the construction of prediction-performance matrixes (Cronbach & Meehl, 1955) from which to judge the accuracy of RISK predictions against final student outcomes (placement in resource classes or regular classes). Establishing a stable discriminant function requires consensual validation; that is, the function must be calibrated on one sample and then fitted to a second sample to determine if it has generality. For this purpose, the four kindergarten cohorts were divided into two groups (the 1980 and 1982 samples and the 1981 and 1983 samples). The 1980/82 group was used as the calibration sample, and the 1981/83 group became the target sample. Finally, the calibration discriminant function loadings were applied to the entire sample collapsed into a single group.

The adequacy of a discriminant function to correctly predict group membership must be judged using multiple criteria that reflect differing aspects of accuracy. Overall accuracy indicates the total number of children correctly placed in each group, whereas specificity assesses the accuracy of the function in assigning regular-classroom children to their group, and sensitivity assesses its accuracy in assigning resource students to theirs. The false-negative rate is an indication of the proportion of children who ultimately received resource placement who were not identified by the screening procedure; the false-positive rate represents the percentage of normal children who were incorrectly identified as potentially having disabilities. Finally, accuracy must be judged in terms of conditional probabilities. Because the base rate of membership in the two groups is so different (14.8% resource/85.2% regular), it is possible for the screening device to have a high overall accuracy while being very inaccurate in terms of predicting resource placement.
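
As a concrete restatement of these definitions, the following calculation derives each parameter from the combined-sample counts reported in Table 1; the counts come from the table, and the code itself is only an illustration.

```python
# Accuracy parameters computed from the combined-sample counts in Table 1.
tp, fn = 148, 40      # resource children predicted resource / predicted regular
fp, tn = 35, 1046     # regular children predicted resource / predicted regular

overall_accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.941
sensitivity      = tp / (tp + fn)                    # 0.787
specificity      = tn / (tn + fp)                    # 0.968
false_negative   = fn / (tp + fn)                    # 0.213
false_positive   = fp / (tn + fp)                    # 0.032

# Conditional probabilities: actual placement given the RISK prediction.
p_sped_given_predicted_sped       = tp / (tp + fp)   # 0.809
p_regular_given_predicted_regular = tn / (tn + fn)   # 0.963
```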

Table 1 includes the prediction/performance matrixes for the three samples; Table 2 presents summary information on the accuracy of the discriminant function for the calibration sample, the target sample, and the two samples combined. First, the various parameters of accuracy are comparable for the calibration and target samples, indicating that the discriminant function loadings had a high degree of generality across the two samples. In fact, the function constructed from the calibration sample was even more accurate when fitted to the target sample. For this reason, discussion of the various definitions of predictive validity will focus on Row 3 of Table 2, which summarizes data for the two samples combined.

Overall accuracy for the inventory was very high at 94.09%, with 1,194 out of 1,269 children correctly assigned to their appropriate educational placement. This percentage is somewhat misleading, however, in that the accuracy rate for nonrisk students was 96.76%, whereas the comparable rate for risk students was 78.72%. The false-negative identification rate for children in resource programs was 21.28%, suggesting that the inventory overlooked about 1 in 5 children who were ultimately placed in resource classrooms. Of utmost importance was the very low false-positive identification rate of the inventory; only 35 of 1,081 normal children (3.23%) were mistakenly identified as likely to require future resource placement. Also of interest were the conditional probabilities of being in one or the other group, given the screening measure's assignment to that group. The probability of not having future educational needs sufficient to warrant resource placement, given the prediction by RISK of no problems, was 96.31%. This represents a substantial improvement over the chance rate of 85.2%.

Of even greater importance, the conditional probability of requiring resource classroom services, given the prediction of resource placement by RISK, was 80.87%, a dramatic increase over the chance rate of 14.8%. In other words, 4 out of 5 children judged by the screening measure to be at risk for future educational failure were ultimately placed in resource classrooms.

The contribution of specific items on the inventory to the predictive function was judged through structural coefficients. Structural coefficients represent the correlations between the original variables and the discriminant function score; the square of a structural coefficient indicates the proportion of that variable's variance accounted for by the discriminant function. Pedhazur (1982) suggested that only structural coefficients above .30 be considered meaningful. On this basis, 28 of the 34 inventory items were considered to make meaningful contributions to the discriminant function.
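
Computing structural coefficients amounts to correlating each original item with the discriminant score; a minimal sketch, assuming an items matrix and group labels held as NumPy arrays, is shown below.

```python
# Sketch of structural coefficients: correlations between each original item
# and the discriminant function score (hypothetical arrays; illustrative only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def structural_coefficients(items, group):
    """Correlate each item with the discriminant score; coefficients above
    .30 are treated as meaningful (Pedhazur, 1982)."""
    lda = LinearDiscriminantAnalysis().fit(items, group)
    scores = lda.transform(items).ravel()   # a single function for two groups
    coefs = np.array([np.corrcoef(items[:, j], scores)[0, 1]
                      for j in range(items.shape[1])])
    return coefs, coefs ** 2                # r and proportion of variance shared
```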

The most important measurement factor in discriminating normal from resource students was the school competence cluster. The 12 items in this factor had an average correlation with the discriminant function of .84, indicating that, on average, about 70% of the variance in each item was shared with the discriminant function. Structural coefficients for two items, "Child has to be moved next to you because he/she needs extra help during activities," and "Teacher has to modify content or teaching approach to meet the needs of the child," were over .90, with values of R-squared above 80%. Items within this factor were of three types: (a) teacher estimates of the child's ability and motivation; (b) ratings of current academic skills, including difficulties with letters and numbers both in reading and writing; and (c) items that assess the child's schoolability, that is, how much the teacher has to modify instructional procedures and curricular materials to meet this child's needs, as well as the extent to which the child is capable of understanding directions, completing assignments in a timely fashion, and being successful when presented with challenging tasks.

All items on the task orientation factor, which assesses the child's task perseverance and freedom from distractibility, were significantly related to the discriminant function, with structural coefficients averaging .54. The highest coefficient (.68) was for the item "If child's activity is interrupted, he/she tries to go back to the activity"; the lowest (.34) was for the item "Child's seating location is changed to stop or prevent him/her from disrupting others." Though not as powerful a factor as school competence, the typical item still shared almost 30% variance with the discriminant function.

The remaining three factors, motor, social, and behavior, did not add substantially to the predictive power of RISK. Although all items assessing fine and gross motor skills were related to the larger function, their structural coefficients were only in the mid-.40s. Half the items on the social factor returned coefficients above .30, led by the item "Child plunges into new activities without hesitation" (.51). Only two items from the six representing the behavior factor were substantially related to the larger function: "When playing with other children, this child argues with them" (.34) and "Child fails to take reprimands well" (.33).

Table 3 shows the profile of achievement and intellectual characteristics of the four groups of children yielded by the discriminant function. The groups listed are (a) accurate regular-class, (b) accurate resource class, (c) false-positive resource class, and (d) false-negative regular-class. This table also contains the average RISK factor scores on each domain for the four groups.

Table 3 shows that children accurately predicted to be in regular classrooms were of average ability, with achievement NCEs in the mid 50s and an IQ of about 100. Z scores on each RISK domain were slightly positive. For accurately predicted resource students, the profile was very different. Achievement NCEs averaged in the low 30s, and the average IQ was about 82. All RISK domain scores were negative, led by the school competence domain, with an average value over 1.25 standard deviations below the mean. Regular-class students falsely predicted to require resource placement yielded achievement scores between those of the first two groups, with average achievement NCEs in the low 40s and an average IQ of 85. Their RISK scores were uniformly negative. Resource-class students falsely predicted to be normal had achievement percentiles in the high 30s and an average IQ score of 78. RISK factor scores were both positive and negative, depending on the particular domain.

Table 4 provides some indication of the relationship between RISK factor scores and the child's future cognitive and academic abilities. Table entries represent the correlations among the five RISK factors and Stanford Achievement Test NCE scores and Otis-Lennon IQ scores. The strongest relationship between RISK factors and achievement is for the school competence factor. The same statement holds true for IQ scores. About 25% of the variance in elementary grade achievement and intelligence can be accounted for by the RISK school competence factor. This covariance percentage falls to about 16% for the relationship between the RISK task orientation factor and subsequent achievement and intelligence. Though the remaining three RISK factors (social, behavior, and motor) were all significantly related to Stanford and Otis-Lennon scores, the correlations were smaller than for the first two factors.
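
The shared-variance figures cited here follow from squaring the factor-outcome correlations; a brief sketch of how such a correlation table could be assembled (hypothetical column names, not the authors' data) is given below.

```python
# Sketch of the factor-outcome correlations summarized in Table 4
# (hypothetical DataFrame column names).
import pandas as pd

RISK_FACTORS = ["school_competence", "task_orientation", "social", "behavior", "motor"]
OUTCOMES = ["reading_nce", "math_nce", "language_nce", "composite_nce", "otis_lennon_iq"]

def factor_outcome_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Correlations between RISK factor scores and later achievement/IQ.
    Squaring an entry gives the shared-variance percentage cited in the text
    (e.g., r = .50 corresponds to about 25% shared variance)."""
    return df[RISK_FACTORS + OUTCOMES].corr().loc[RISK_FACTORS, OUTCOMES]
```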

DISCUSSION

It seems evident that kindergarten teachers' ratings of their students can be used to predict, with substantial accuracy, the likelihood that children will ultimately experience school difficulties sufficient to warrant placement in special education. Fletcher and Satz (1984), in comparing teacher ratings to test-based predictions of educational risk, found teachers to be better predictors of high risk (such as that represented by special education placement), whereas tests were better at determining low risk (as in the case of low-achieving children not placed in special education).

Ratings on the RISK screening measure were clearly related to the child's future school performance. Analysis of item structural coefficients in the discriminant function revealed that most variables assessed by the measure contributed to predicting the child's future risk. In addition, predictive validity was not affected substantially by using a function calibrated on one sample to predict risk in a second sample. The weighting of the variables in the measure appears to have generality.

RISK predictions were differentially accurate, depending on the specific prediction being made. With regard to children without disabilities, the measure was accurate in judging them not likely to receive special education placement. Ninety-six out of every 100 regular-class children were accurately identified, a substantial improvement over the base or chance rate of 85.2%, which represents the proportion of these students in the entire sample. With regard to resource predictions, the accuracy rate was 80.87%, far above the chance rate of 14.8%.

Accuracy of special education predictions was hampered by both false-positive and false-negative errors. The number of false-negative nominations (40) represented a small proportion of the entire sample (3.15%) but over 20% of all children who were eventually placed in resource classrooms. Interestingly, over half these students were girls, although the entire special education sample was only about 30% female. RISK factor scores for this group differed from those of other special education students. Their scores on the school competence and task orientation factors were lower than for the nonrisk group, but higher than for other resource students. On the social and behavior factors, their scores were much higher than for other special education students and comparable to those of their regular-class peers. Although kindergarten teachers recognize that these children have some school problems, they view them as competent socially and not likely to engage in inappropriate or disruptive behavior. Clearly RISK misses these children in the screening process, but they are eventually identified through other resources within the school districts.

The 35 regular-class children falsely identified as being at risk for resource placement represented 2.75% of the entire sample. They included 26 boys and 9 girls. Their achievement and IQ scores were above those of resource students but far below scores of most children in regular classrooms. RISK factor scores for the school competence domain were higher than those of resource students but far lower than those of their regular-class peers. Group scores on the task orientation, social, and behavior dimensions were comparable to those of accurately identified resource students. Although children receiving false-positive identifications are not considered to have disabilities, it is clear that they are lower achievers in regular classroom programs and could have likely benefited from early intervention. In addition, a disproportionate number of these children were third graders, which raises the possibility that they may still be referred to special education in the future.

The predictive utility of the RISK screening inventory, in large part, stems from using teacher judgment as the basis for determining educational risk, as opposed to making such decisions based on children's perceptual and psycholinguistic abilities as measured by standardized tests. Such standardized measures assess a narrow range of children's skills that are thought to be prerequisite to academic achievement, whereas teacher judgments of children's risk have several components. Primary among these is the teacher's view of the child's ability in relation to other children in the same classroom. Second, the teacher judges the child's current school competence in terms of mastery of grade-appropriate skills and ability to function in the classroom (follow directions and work independently). Finally, teachers judge children in terms of the teacher investment required to teach the child (Feuerstein, 1980), essentially the extent to which curricular and instructional modifications are required to meet the child's educational needs. Each of these components of teachers' judgments is highly related to future educational success or failure.

The predictive validity of RISK is also likely enhanced in other ways as a result of using teacher ratings. Young children's performance on standardized tests is extremely variable (McCall & Applebaum, 1983) and susceptible to rapid change (Mercer et al., 1988). These factors limit the utility of static measures that represent a behavioral snapshot of the child at one point in time. The teacher ratings, however, are based on an extended history with the child. This history allows the evaluator to make judgments of students based on cumulative evidence, presumably a more stable indicator of the child's strengths and weaknesses.

The use of local norming may also increase the predictive validity of RISK. Variation in achievement across school districts is such that a particular score on a nationally standardized measure may represent an average skill level in one school district but low achievement in another district or perhaps even high achievement in a third district. RISK results are normalized within the classroom; the child's capabilities are judged in relation to others in the immediate school environment who will ultimately serve as the benchmark by which school performance will be assessed. This approach is sensitive to the differing characteristics of schools and school districts and provides a sensible alternative to interpreting performance based on nationally developed screening measures whose standardization sample may not adequately reflect the population of a specific school or district.

Screening measures such as RISK provide a useful means by which to identify children likely to experience future difficulties in school that require special educational services. These measures do not establish the basis of the child's particular needs, nor may they be used to diagnose children as having disabilities. Instead, they extract from the larger school population a subset of students likely to experience subsequent school failure. Far more complex assessments must then be undertaken to determine both the nature of the child's needs and the best approach to remediation. The advantage of a good screening measure is that it allows schools to focus their limited diagnostic resources on a relatively small group of children who are likely to benefit from the attention.

REFERENCES

Adelman, H.S., & Feshbach, S. (1971). Predicting reading failure: Beyond the readiness model. Exceptional Children, 39, 349-354.

Butler, S.R., Marsh, H.W., Sheppard, M.J., & Sheppard, J.L. (1985). Seven-year longitudinal study of the prediction of reading achievement. Journal of Educational Psychology, 77(3), 349-361.

Coleman, J.M., & Dover, G.M. (1989). Rating Inventory for Screening Kindergartners. Austin, TX: ProEd.

Cronbach, L.J., & Meehl, P.E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

deHirsch, K., Jansky, J., & Langford, W.S. (1966). Predicting reading failure. New York: Harper & Row.

Feshbach, S., Adelman, H., & Fuller, W.W. (1974). Early identification of children with high risk of reading failure. Journal of Learning Disabilities, 7(10), 49-54.

Feuerstein, R. (1980). Instrumental enrichment: An intervention program for cognitive modifiability. Baltimore: University Park Press.

Fletcher, J.M., & Satz, P. (1984). Test-based versus teacher-based predictions of academic achievement: A three-year longitudinal follow-up. Journal of Pediatric Psychology, 9(2), 193-203.

Joiner, L.M. (1977). A technical analysis of the variation in screening instruments and programs in New York State. New York: City University of New York, Center for Advanced Study in Education. (ERIC Document Reproduction Service No. ED 154 596)

Lichtenstein, R. (1981). Comparative validity of two preschool screening tests: Correlational and classificational approaches. Journal of Learning Disabilities, 14(2), 68-72.

Mann, V.A. (1984). Longitudinal prediction and prevention of early reading difficulty. Annals of Dyslexia, 34, 117-136.

McCall, R., & Applebaum, M. (1983). Design and analysis in developmental psychology. In P. Mussen (Ed.), Handbook of child psychology (Vol. 4, pp. 415-476). New York: Wiley.

Meisels, S.J. (1985). Developmental screening in early childhood: A guide (rev. ed.). Washington, DC: National Association for the Education of Young Children.

Meisels, S.J. (1987). Uses and abuses of developmental screening and school readiness testing. Young Children, 43(2), 4-6, 68-73.

Mercer, C.D., Algozzine, B., & Trifiletti, J. (1988). Early identification: An analysis of the research. Learning Disability Quarterly, 2, 12-24.

Michigan Department of Education. (1984). Superintendent's Study Group on Early Childhood Education. Lansing, MI: Author.

Muehl, S., & Forell, E.R. (1973). A follow-up study of disabled readers: Variables related to high school reading performance. Reading Research Quarterly, 9, 110-123.

Pedhazur, E.J. (1982). Multiple regression in behavioral research. New York: Holt, Rinehart and Winston.

Satz, P., & Fletcher, J.M. (1979). Early screening tests: Some uses and abuses. Journal of Learning Disabilities, 12(1), 65-69.

Satz, P., & Fletcher, J.M. (1988). Early identification of learning disabled children: An old problem revisited. Journal of Consulting and Clinical Psychology, 56(6), 824-829.

Vacc, N.A., Vacc, N.N., & Fogelman, M.S. (1987). Preschool screening: Using the DIAL as a predictor of first-grade performance. Journal of School Psychology, 25, 45-51.

ABOUT THE AUTHORS

J. MICHAEL COLEMAN (CEC TX Federation), Associate Professor, School of Human Development, The University of Texas at Dallas. G. MICHAEL DOVER, Director, Special Education Services, Lebanon Tenth District, and Wilson County, Tennessee Public Schools.

We wish to acknowledge the cooperation and support of students and staff from the Lebanon Tenth District and Wilson County, Tennessee, public schools in the completion of this project. George Hay, Sarah Morgan, Eunsook Kang, and Laura McHam provided invaluable assistance in data entry and analysis. Finally, our gratitude to Ann Minnett, Michael Pullis, Nanci Bray and Ernest Gotts, who reviewed earlier versions of this article.

Manuscript received July 1990; revision accepted February 1992.
TABLE 1
Prediction-Performance Matrixes for Calibration, Target, and Combined Samples

                                  Actual Placement
Predicted Placement        SPED               REGULAR            Total

Calibration Sample
  SPED                      68 (76.40%)        22 (4.31%)          90
  REGULAR                   21 (23.60%)       489 (95.69%)        510
  Total                     89                511                 600

Target Sample
  SPED                      80 (80.80%)        13 (2.28%)          93
  REGULAR                   19 (19.20%)       557 (97.72%)        576
  Total                     99                570                 669

Entire Sample
  SPED                     148 (78.72%)        35 (3.23%)         183
  REGULAR                   40 (21.28%)     1,046 (96.77%)      1,086
  Total                    188              1,081               1,269

Note: SPED = special education students; REGULAR = students in regular education classrooms. Percentages are computed within columns of actual placement.
TABLE 2
Various Parameters of Accuracy for the Calibration, Target, and Entire Samples, Expressed as Percentages

Parameter                            Calibration    Target    Combined
                                     Sample         Sample    Sample
Overall accuracy                     92.83          95.21     94.09
Specificity                          95.69          97.72     96.76
Sensitivity                          76.40          80.80     78.72
False positive                        4.30           2.28      3.23
False negative                       23.59          19.19     21.28
Conditional probability (SPED)       75.55          86.02     80.87
Conditional probability (REGULAR)    95.88          96.70     96.31

Note: SPED = special education students; REGULAR = students in regular education classrooms.


[Tables 3 and 4 omitted from this reproduction.]