

Prediction Is Not the Only Measure of a Plan: A Response to Marston

Recently Marston (1988) attempted to demonstrate the predictive usefulness of two graphing methods, arithmetic and logarithmic, in helping practitioners choose the most technically adequate graph for measuring progress on individual educational plans (IEPs). Marston analyzed each charting procedure in terms of its predictive accuracy at various points in time. Specifically, Marston obtained various 1-minute (min) timed performance measures of reading and written expression (words read correctly and incorrectly, and words written correctly and incorrectly) from 83 low-achieving students in Grades 3-6. Data were collected weekly for 10 weeks. After 7 weeks, using computer simulation, regression equations were determined for each measure. Subsequently, the slope of each student's performance over the initial 7-week period was used to predict performance in terms of expected frequencies at Weeks 8, 9, and 10. Slopes were calculated using simulated charting on equal-interval and semilogarithmic graphs. Deviations between students' actual performance and predicted performance for each type of chart were determined.

Results indicated statistically significant differences favoring the equal-interval scale on a number of comparisons made at Weeks 8, 9, and 10. Marston found that the equal-interval chart produced more accurate predictions than did the logarithmic chart (specifically, the Standard Behavior Chart [Pennypacker, Koenig, & Lindsley, 1972]). Marston asserted that the likelihood of using repeated measurement for educational planning, for interventions, and for assessments by special educators may be related to teacher graph preference. He concluded that the research favoring equal-interval graphs provides an empirical basis for making the appropriate choice of graphs.

We believe that Marston has unfairly dismissed the usefulness of the Standard Behavior Chart (SBC) as a technically adequate measurement tool. This article provides discussion of conceptual and practical issues raised by Marston's study. These issues require attention when practitioners consider the adequacy of a particular graphing approach.


Importance of the Predictive Function

According to Marston, proponents of the SBC have maintained that a significant characteristic of the chart is its ability to better predict student performance. This assertion is an overstatement. Regarding the trend line or slope used for prediction, White and Haring (1980) stated, "The real purpose of the line is to provide a basis for timely program change decisions" (p. 259). Actually, proponents more often list as advantages the SBC's consistent and orderly display of data, the extended range of the chart's scale (permitting a wide range of behaviors to be recorded), and the use of a real-time (calendar day) X axis (an improvement over informal graphs that display data by sessions only) (White, 1986; White & Haring, 1980).

Marston appears to be concerned about the capability of various charts to produce the most exact prediction, and he feels that the acid test is a minimal degree of error in prediction. An important parameter of prediction, however, is the degree of confidence that one has about the prediction. Predicting future student performance from current performance data is impossible to do with great precision. Consequently, teachers trained to use the SBC attempt to establish a range of days within which a predicted value can be expected to occur with some degree of confidence. This is done by drawing lines above and below the data display, parallel with the trend line that has already been drawn centrally through the data points. These parallel lines border the data at its lower and uppermost boundaries. When these lines are extended to the horizontal line representing the rate that one is attempting to predict, one can estimate the range of days in which the predicted value is expected to occur (Pennypacker, Koenig, & Lindsley, 1972). Using the data from Marston's example charts (his Figure 1), we applied the procedure just described to project the range of days for arrival at the 10-week predicted performance (48 words for the equal-interval graph and 60 words for the semilogarithmic) for each chart. The range of days for the logarithmic chart was approximately one-third smaller than for the equal-interval scale (approximately 11 versus 16 weeks). This indicates that the SBC provides the more precise estimate. If one wishes to predict for the purpose of planning when to move to a new objective or when to change the difficulty of material, the graph that provides the narrowest range between earliest and latest arrival dates would be most useful.
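The envelope procedure just described can be sketched numerically. The following is a minimal illustration in Python, not the authors' or Marston's actual computation: it fits a least-squares trend to seven hypothetical weekly counts, shifts the trend up and down by the largest residual to form the envelope, and solves for the earliest and latest weeks at which the envelope reaches a frequency aim. The equal-interval chart is modeled by fitting raw counts; the semilogarithmic (SBC-style) chart by fitting log10 counts.

```python
import math

# Hypothetical weekly counts (e.g., words read correctly), Weeks 1-7.
weeks = [1, 2, 3, 4, 5, 6, 7]
counts = [20, 24, 22, 28, 31, 30, 35]
AIM = 48  # frequency aim whose arrival date is being projected

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def arrival_range(xs, ys, aim):
    """Shift the trend line up and down by the largest residual
    (the envelope), then solve aim = slope * week + intercept for
    each boundary to get the earliest and latest arrival weeks."""
    slope, intercept = fit_line(xs, ys)
    spread = max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys))
    earliest = (aim - (intercept + spread)) / slope
    latest = (aim - (intercept - spread)) / slope
    return earliest, latest

# Equal-interval chart: trend fitted on raw counts.
lin_lo, lin_hi = arrival_range(weeks, counts, AIM)

# Semilogarithmic chart: trend fitted on log10 counts.
log_lo, log_hi = arrival_range(weeks, [math.log10(c) for c in counts],
                               math.log10(AIM))

print(f"equal-interval: weeks {lin_lo:.1f} to {lin_hi:.1f} "
      f"(range {lin_hi - lin_lo:.1f})")
print(f"semilog:        weeks {log_lo:.1f} to {log_hi:.1f} "
      f"(range {log_hi - log_lo:.1f})")
```

With real classroom data the envelope is drawn by hand on the chart itself; the sketch simply makes explicit that the comparison of interest is the width of each chart's arrival-date range, not the point prediction alone.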

Proponents of precision teaching use trend (celeration) lines to summarize a student's current performance, to decide whether progress is occurring toward goals set by the teacher. As Howell, Kaplan, and O'Connell (1979) pointed out, the prediction is used to determine where the student is going under current instructional and motivational conditions, not where the student should go. White and Haring's (1980) rationale for trend analysis (to see if progress is being made in the desired direction and at the desired rate, so that new goals and objectives may be produced in response to unsatisfactory student progress) is misinterpreted by Marston. The idea that slopes are used to determine where students should go is neither stated nor implied by White and Haring.

Monitoring Progress on IEPs

We find it curious that Marston focused his article on monitoring progress on the IEP, and subsequently examined a graphic measurement function (prediction) that, in practice, is not closely related to the IEP goal-monitoring process. Graphs inform teachers of students' progress (by showing trends over time for correct and incorrect performance within a portion of the curriculum). Typically, IEP goal and objective statements are rather broad, whereas charting is used for monitoring progress on more narrowly defined behaviors. Many small curricular steps are taken on the way to an annual goal, or even toward a shorter-term IEP objective. Predicting a frequency weeks away, using many weeks of previously gathered data, is not a practice in which teachers typically engage. There is too great a likelihood that numerous curricular revisions (requiring new frequency aims) would be made over a period of even a month, let alone longer.

Eventually, progress or the lack of it may indicate that the appropriateness of an IEP goal or objective should be reevaluated, but the ability to predict a particular frequency is not part of the process. To monitor progress, specific aims are chosen a priori; desired lines of progress, projected from initial performance (the median value of a student's first 3 data points) to a desired performance (an aim chosen by the teacher, parent, or student), are drawn; and progress is then monitored relative to this line (White & Haring, 1980). The desired line is not created after teaching occurs. Marston's test of the accuracy of prediction between charts does not relate to efficacy in measuring IEP progress any more than accuracy in predicting where someone wishes to go could be related to knowing what brand of map will be used.
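The a priori aim-line procedure described above can be made concrete with a short sketch (hypothetical numbers, and a simplified rendering of the practice rather than any published algorithm): the desired line runs from the median of the first 3 data points to the chosen aim, and each day's performance is judged against it.

```python
import statistics

def desired_line(first_three, start_day, aim, aim_day):
    """Aim line drawn a priori: from the median of the first 3
    data points (plotted at start_day) to the chosen aim
    (to be reached by aim_day)."""
    start = statistics.median(first_three)
    slope = (aim - start) / (aim_day - start_day)
    return lambda day: start + slope * (day - start_day)

# Hypothetical: first three daily counts of 10, 12, 11 correct per
# minute on day 2; aim of 40 correct per minute by day 30.
line = desired_line([10, 12, 11], start_day=2, aim=40, aim_day=30)

def on_track(day, observed, line):
    """Is observed frequency at or above the desired line that day?"""
    return observed >= line(day)

print(on_track(10, 22, line))  # prints True
```

The point the sketch makes is the one in the text: the line exists before teaching begins, so monitoring is a comparison against a pre-set standard, not a post hoc prediction.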


Statistical Significance Versus Practical Importance


Even though we feel that predictive capability is not a critical dimension for judging the usefulness of either chart, let us assume for the moment that it is. An inspection of Marston's tables reveals that various comparisons differed sufficiently to attain statistical significance, given an n of 83. Specifically, 9 of the 24 comparisons were statistically significant. However, inspecting the mean deviations of both charts for each comparison (remembering that both charts generated errors in prediction) shows that three significant differences resulted from comparisons in which SBC deviations were within approximately 12% or less of the interval chart's deviations (words read correctly at Week 9; correct letter sequences at Week 9; and correct letter sequences at Week 10). Differences as small as these are, for practical purposes, unimportant.

In sum, 9 out of 24 prediction comparisons, or approximately 37%, were statistically significant. If 3 of the significant comparisons are considered practically unimportant and are disregarded, then 6 of 24, or 25% of all comparisons, show significant differences in favor of the interval graph. Even if one strictly focuses on the 9 significant differences that were obtained from the 24 comparisons, this hardly represents a clear empirical basis for discarding a measurement tool like the SBC, which has features unavailable on the equal-interval graph.

Standard Behavior Chart Features

Various timing and behavior counting parameters are employed when observing behavior in preparation for recording data on a graph. Typically, in academic recording, a 1-min time sample is obtained; and data are plotted representing the count per minute of correct or incorrect pupil responses. In some situations, however, 1-min timings are too short to obtain a sample of the behavior that is sufficiently representative of a student's skill in an area. For example, a teacher may be interested in a student's performance on all basic multiplication facts. More than 1 min of timing would be necessary to assess this skill for many children (and adults). On occasions where the duration of a timing lasts more than 1 min, the observer must divide minutes spent recording into the observed count of behavior to obtain data values in terms of count per minute (this is not terribly difficult, but it is an extra task and often means keeping a hand calculator available).

When employing the equal-interval chart, before plotting data, minutes and behavior counts must be converted into frequency (number of behavioral events divided by time) by calculation, unless the same counting period is used each recording session. The SBC, because of the logarithmic scale, permits a simple calculation tool, the frequency finder, to be used to perform the division necessary for converting various counting periods and behavior counts into frequency (the frequency finder is also used for determining the value of slopes used to summarize charted data).
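The conversion described above is a single division, which is exactly what the frequency finder performs graphically. A minimal sketch, with hypothetical figures:

```python
def frequency(count, minutes):
    """Count per minute: the division needed whenever a timing
    lasts more (or less) than 1 minute."""
    if minutes <= 0:
        raise ValueError("timing must last a positive number of minutes")
    return count / minutes

# A 4-minute timing of multiplication facts: 72 correct, 6 errors.
print(frequency(72, 4))  # 18.0 correct per minute
print(frequency(6, 4))   # 1.5 errors per minute
```

On the equal-interval chart this arithmetic must be done before plotting every session with a nonstandard timing; the SBC's logarithmic scale lets the frequency finder do it directly on the chart.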

Use of Charting by Teachers

Marston gathered data on children once a week for 7 weeks. This procedure does not reflect preferred practice by teachers engaged in precision teaching or any similar endeavor. Frequent, direct, and continuous measurement has been advocated in precision teaching over the years (Lovitt, 1970); the notion of collecting only 7 data points, 1 week apart, is foreign to actual practice. Marston used a procedure that is not practiced in the field to perform a test of the two charts.

Teachers trained in precision teaching are encouraged to gather daily data and make decisions about curricular changes after 3 days of unsatisfactory student progress. Student performance is evaluated against a line of progress to a pre-established goal that existed before the measurement process began. Marston's procedure represents essentially a laboratory condition to examine a function of the charts that is remotely related to teaching or monitoring IEP progress.

Implications for Teacher Training

Marston pointed out that the most important implication of his research concerns teacher training. This argument was based on the notion that teachers resist using the SBC because of its complexity. However, research has indicated that the SBC is not overly difficult to use (White & Haring, 1980) and that training time and cost appear reasonable (White, 1986).

Marston also mentioned that educators resist using the SBC because of difficulty in explaining it to parents. Perhaps a teacher's training would affect resistance. Here, Marston may have touched on an important point. Teacher educators must train their pupils as thoroughly as possible because teaching practices are coming under closer scrutiny than in the past. However, the SBC has been explained to children successfully (Bates & Bates, 1971); thus, explanation is not an insurmountable obstacle. In addition, other measurement concepts, such as standard scores, percentile ranks, and stanines, are difficult to explain to parents, but that does not make their use less valid or important.


Research on measurement tools is unquestionably valuable, but it must be thorough and fair to the intended use of the instrument. The best approach to evaluating any practice, especially in education, is to validate the practice based on its intended use. In evaluating graphing as a teaching aid, it is reasonable to evaluate what teachers do and what happens to students as a result of their activity. Until research focuses on the central question of concern to administrators, teachers, parents, and the public (i.e., Does the practice make a difference in what students do in terms of behavior that is socially valuable?), there is always the chance that conclusions may be based on asking the wrong questions.


Bates, S., & Bates, D. F. (1971). "... and a child shall lead them": Stephanie's chart story. TEACHING Exceptional Children, 3, 111-113.

Howell, K. W., Kaplan, J. S., & O'Connell, C. Y. (1979). Evaluating exceptional children: A task analysis approach. Columbus, OH: Charles E. Merrill.

Lovitt, T. (1970). Behavior modification: Where do we go from here? Exceptional Children, 37, 157-167.

Marston, D. (1988). Measuring progress on IEPs: A comparison of graphing approaches. Exceptional Children, 55, 38-44.

Pennypacker, H. S., Koenig, C. H., & Lindsley, O. R. (1972). Handbook of the standard behavior chart. Kansas City, KS: Precision Media.

White, O. R. (1986). Precision teaching--precision learning. Exceptional Children, 52, 522-534.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Charles E. Merrill.

MARK A. KOORLAND (CEC Chapter #311) is a Professor in the Special Education Department at the Florida State University, Tallahassee. C. MICHAEL NELSON (CEC Chapter #83) is a Professor in the Special Education Department at the University of Kentucky, Lexington.
COPYRIGHT 1990 Council for Exceptional Children

Article Details
Title Annotation: Point/Counterpoint; comment on D. Marston's article in 'Exceptional Children', vol. 55, 1988
Author: Koorland, Mark A.; Nelson, C. Michael
Publication: Exceptional Children
Date: Sep 1, 1990