Feedback types in programmed instruction: a systematic review.
It has been suggested that the use of programmed instruction is declining (McDonald, Yanchar, & Osguthorpe, 2005). However, it is more accurate to say that the characteristics of programmed instruction have changed over time. For example, early programmed instruction typically used small step sizes and construction responding, whereas modern programmed instruction typically uses larger step sizes and multiple-choice questions, perhaps because of the ease of programming. Despite these changes, computer-based instruction commonly used in both educational and business training settings still includes the primary components of programmed instruction.
Programmed instruction consistently includes three components, which compose a contingency of reinforcement (Skinner, 1968): (a) antecedent stimuli that occasion a response, (b) an opportunity for the learner to emit a response in the presence of the stimuli, and (c) an outcome, consisting of information about the correctness of the response. These contingencies of reinforcement, often called "frames," are presented in a logical sequence. However, the dimensions of the components vary among instructional designers.
The third component, information about the correctness of the response, is typically called feedback and is the focus of this review. Skinner (1968) emphasized the importance of feedback in instruction, describing its function as shaping and maintaining the learner's appropriate responding. Similarly, Holland (1960) conceptualized feedback as an immediate reinforcer for correct responses.
Considerable variation exists in the nature of feedback used in programmed instruction. It is important to determine whether feedback is beneficial in programmed instruction, and if so, what arrangement is best, so programmed instruction can be as effective as possible.
Several early studies suggest that feedback does not increase learning (Feldhusen & Birt, 1962; Moore & Smith, 1961, 1964; Rosenstock, Moore, & Smith, 1965; Wentling, 1973) or is even detrimental (Lublin, 1965). However, Kulhavy (1977) and Roper (1977) suggested that the majority of this research was conducted with instruction that allows learners to view feedback before completing a response, or overprompted instruction in which the learner can make the correct response by only superficially attending to the material. For example, Lublin (1965) and Rosenstock et al. (1965) presented the correct answer directly under the frames. When learners can peek at feedback, or instruction is overprompted, learners may be copying answers into the frames. Such instruction is said to consist of "copy frames." Research specifically investigating copy frames has found that feedback presented after the response produced significantly better results than feedback that was visible during response requirements (Anderson, Kulhavy, & Andre, 1971, Exp. II; Anderson, Kulhavy, & Andre, 1972). These studies anecdotally reported learners complaining about the temptation to peek at the highly visible feedback. Also, heavily prompted frames have been shown to be ineffective in promoting learning (Anderson & Faust, 1967).
Unfortunately, it is often difficult to determine whether a study used copy frames without examining the specific instruction used. This is one of the major limitations of instructional design research articles and reviews. A research article is intended to provide enough information for the study to be replicated. However, instructional design research very often does not give sufficient detail about the nature of the instruction to do so. Researchers often report the subject and general format of the instruction, but not much specific instructional content. Thus, it is often not possible to determine whether overprompted or otherwise low-quality instruction was used, unless the instruction is available. Furthermore, the efficacy of instruction cannot be determined by only viewing it; instruction must be tested and revised until specified performance standards are met (Markle, 1990). We recommend that research on programmed instruction include a detailed description of the instruction as well as data supporting its efficacy.
This paper reviews feedback research to answer the question "What impact do different types of feedback have on the effectiveness of programmed instruction?" In the current paper, effectiveness is construed as overall impact on criterion test performance. We also included a few studies, indicated in our review, that did not include a criterion test but demonstrated an impact on within-program performance. Data on time and approachability, measured by learner reports of instructional preferences, are also provided when available. The review concludes with a summary of findings and suggestions for application and future research.
This is a comprehensive review of existing empirical investigations of feedback types in programmed instruction. We searched the ERIC and PsycINFO databases for any article pertaining to "feedback" and either "programmed" and "instruction" or "computer" and "instruction," because not all authors refer to their work as "programmed." Excluded from the review are unpublished dissertations, research that used instructional materials that are not sufficiently described so as to be clear that they contain all three primary components of programmed instruction, research that provided no demonstration of learning, and research conducted with instruction that is composed of copy frames. Unfortunately, as previously discussed, it is not always possible to identify the nature of the instruction used.
Types of Feedback
Feedback can be arranged in a variety of ways. Knowledge of results (KR) provides the learner with information about the correctness of his or her response; for example, "correct" or "wrong." The correct answer is not provided in the feedback. Knowledge of correct response (KCR) specifies the correct answer, and is often combined with KR, for example, "Incorrect. The correct answer is reinforcement." In elaboration feedback, additional information or an explanation of the correct answer is provided, in addition to the correct answer, for example, "Incorrect. The correct answer is reinforcement. The child's crying increased in frequency, so it was reinforcement." Elaboration feedback is also sometimes called extended feedback. Delayed feedback is delivered (a) following the response after a fixed passage of time, for example, 15 s after the response, or (b) after a number of responses, for example, after completing a set of instruction. Review feedback may require the learner to repeat each missed item a specific number of times or until correct, called answer until correct (AUC), or it could be a re-presentation of all incorrectly answered questions at the end of an instructional sequence. The term feedback has also been used to describe potentially punishing consequences for incorrect responses or added incentives for correct responses.
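The three informational levels defined above (KR, KCR, and elaboration) can be made concrete with a minimal Python sketch; the `Frame` fields and function names are illustrative assumptions, not taken from any of the reviewed programs:

```python
from dataclasses import dataclass

# Hypothetical frame record; field names are illustrative, not from the review.
@dataclass
class Frame:
    question: str
    correct_answer: str
    elaboration: str  # explanation shown only by elaboration feedback

def kr_feedback(frame: Frame, response: str) -> str:
    """Knowledge of results: correctness only; the correct answer is withheld."""
    return "Correct." if response == frame.correct_answer else "Wrong."

def kcr_feedback(frame: Frame, response: str) -> str:
    """Knowledge of correct response: KR plus the correct answer."""
    if response == frame.correct_answer:
        return "Correct."
    return f"Incorrect. The correct answer is {frame.correct_answer}."

def elaboration_feedback(frame: Frame, response: str) -> str:
    """Elaboration: KCR plus an explanation of the correct answer."""
    if response == frame.correct_answer:
        return "Correct."
    return (f"Incorrect. The correct answer is {frame.correct_answer}. "
            f"{frame.elaboration}")
```

Each function returns strictly more information than the one before it, which is the dimension along which the studies reviewed below differ.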
Programmed instruction research typically uses criterion posttest scores as a dependent measure. Some research compares pretest to posttest performance gains. Groups are often matched on the basis of some other measure, such as performance on a standardized test. Though much less common, a few within-subject comparisons are also present in the literature. Posttests sometimes contain the exact questions used during instruction and sometimes contain related questions or modified questions. Many of the studies also used a delayed criterion test to assess retention.
In many cases, studies of feedback use only questions during the instructional program and involve a population of students who already have exposure to the content of the program. Studies of such drill-and-practice materials were excluded in favor of reviewing instruction that is designed to "stand-alone" and teach a topic to novices. Also, as previously noted, a few studies did not include a criterion measure and discuss results in terms of performance during the instructional program.
Review of Results
This portion of the review is organized by the different arrangements of feedback described earlier. Table 1 provides a chronological list of the reviewed articles and indicates the feedback conditions compared in each, which also corresponds to the review section(s) in which they are discussed.
Knowledge of Results
Of the various types of feedback arranged in instructional design, KR provides the least information to the learner. Following incorrect responses, learners are not provided with correct answers; they are simply told that the answer given is not correct. This may be useful to a learner who has narrowed the response down to a select few, but it would be of little benefit to a learner who is essentially guessing, which is presumably when errors are most likely.
In the four reviewed studies that included KR and a no-feedback control, KR was never shown to be more effective (Gilman, 1969; Moore & Smith, 1964; Roper, 1977; Rosa & Leow, 2004). Furthermore, both KCR (Moore & Smith, 1964; Roper, 1977) and elaboration feedback (Gilman, 1969; McKendree, 1990; Pridemore & Klein, 1991; Roberts & Park, 1984; Rosa & Leow, 2004; Salas & Dickinson, 1990; Waldrop, Justen, & Adams, 1986) have been shown to be superior to KR, and KR was never shown to be more effective than another type of feedback.
We do not recommend the use of KR alone, and future research should probably concentrate on other types of feedback. However, it is interesting to note that KR is often explicitly included in other types of feedback and is implicit in all types.
Knowledge of Correct Response
KCR provides substantially more information than KR, in that the learner is informed of the correct answer after incorrect responses, and KCR is used extensively in programmed instruction. In fact, all but one of the reviewed articles used some form of KCR, often combined with other procedures. Only studies in which KCR alone was compared to other conditions are reviewed in this section.
Of the reviewed studies, 12 included a KCR group and a no-feedback group. Of those, five showed KCR to be more effective than no feedback (Anderson et al., 1971, Exp. I & II; Clariana, Ross, & Morrison, 1991; Grant, McAvoy, & Keenan, 1982; Roper, 1977), five found no significant difference (Anderson et al., 1972; Gaynor, 1981; Gilman, 1969; Morrison, Ross, Gopalakrishnan, & Casey, 1995; Pysh, Blank, & Lambert, 1969), and one found no feedback to be superior to KCR (Pridemore & Klein, 1995). The remaining study, Moore and Smith (1964), found that KCR via teaching machine produced better within-program performance than KR and no feedback with construction-response materials, but not with multiple-choice materials. KCR on paper produced better within-program responding than KR and no feedback with both types of response requirement, but this can be accounted for by "peeking." As noted earlier, Roper (1977) also found KCR to be more effective than KR. Similarly, though Gilman (1969) found no significant difference in posttest performance between KCR and KR, the KCR group required fewer iterations than the KR group (incorrectly answered items were repeated at the end of the program until all questions had been answered correctly).
Overall, KCR appears to be a reasonably effective approach. However, as later sections will discuss, other variations can provide additional benefit. Given the discrepancies in results across studies, further investigation into the conditions in which KCR is maximally effective is warranted. However, unless KCR can be shown to be at least as effective as other more time-intensive and/or design-labor intensive forms of feedback under those conditions, KCR cannot be recommended as an optimal strategy.
Elaboration Feedback
One situation in which KCR is particularly likely to be no more effective than KR (and no feedback) is when learners do not understand the material and therefore do not know why an answer is incorrect. However, feedback that tells the learner why an answer is incorrect or why the correct answer is correct may give the learner an opportunity to better understand and respond correctly in the future.
Of the three studies comparing elaboration feedback with no feedback, two found it to be more effective (Gilman, 1969; Grant et al., 1982) and one found no difference (Pridemore & Klein, 1995), though in the latter case participants in the no-feedback condition reported a stronger desire for more feedback. Elaboration feedback has been almost universally shown to be more effective than KR (Gilman, 1969; McKendree, 1990; Pridemore & Klein, 1991; Roberts & Park, 1984; Salas & Dickinson, 1990; Waldrop et al., 1986). However, Rosa and Leow (2004) found elaboration feedback that included AUC to be no better than KR without AUC. Similarly, though two studies reported no difference between elaboration feedback and KCR (Dempsey, Litchfield, & Driscoll, 1993; Merrill, 1987), the other five reviewed studies making the comparison showed elaboration feedback to be more effective (Collins, Carnine, & Gersten, 1987; Gilman, 1969; Grant et al., 1982; Kim & Phillips, 1991; Pridemore & Klein, 1995). Elaboration feedback has also been shown to be more effective than KR with AUC in two studies (Terrell, 1990; Waldrop et al., 1986), though Dempsey et al. (1993) showed no difference between elaboration feedback and KCR with AUC (or KCR without AUC, as already mentioned).
Overall, elaboration feedback appears to be highly effective. Further study is warranted to determine the relative merits of elaboration feedback with and without explicit KR and/or KCR. Gilman (1969) found that elaboration feedback with KR and KCR was superior to no feedback and KR, but elaboration feedback without KR or KCR was not significantly better (though the elaboration-only group did have a higher mean than the KR and no-feedback groups). In two of the other reviewed studies that included an elaboration-feedback group, the elaboration feedback appears not to have included KCR. McKendree (1990) found this type of elaboration feedback to be better than KR, but Merrill (1987) found it to be no better than KCR.
In addition to effectiveness, instructional designers are often interested in efficiency. It might be expected that the additional information provided by elaboration feedback would require more study time. Indeed, Pridemore and Klein (1991) found that elaboration feedback did result in higher feedback study times than KR, but total lesson times were not reported. However, four studies reported no difference in total lesson time (Collins et al., 1987 [KCR]; Grant et al., 1982 [KCR and no feedback]; Roberts & Park, 1984 [KR]; Salas & Dickinson, 1990 [KR]) and only one found elaboration feedback (along with two forms of review feedback) to require more total lesson time than KCR, with no difference in effectiveness (Dempsey et al., 1993).
Consistent with the notion that elaboration feedback is more effective because it provides more information, Nagata (1993) found, in the context of second-language instruction, that feedback that provided a detailed grammatical analysis of errors and relevant grammar rules was more effective than feedback that only included information regarding which key words were missing, used incorrectly, or not expected.
Also, Lalley (1998) found that a video segment with narrated elaboration feedback was more effective than the same information provided in text form in instruction on mammals, but not in similar instruction on reptiles.
Delayed feedback. Holland (1960) described feedback in programmed instruction as "immediate reinforcement for correct answers" (p. 276). Skinner (1968) also stressed the importance of immediate feedback. However, some researchers have examined the impact of delayed feedback on posttest performance. One reason delayed feedback might promote superior performance is that the learner attends to the instruction or question during the delay. However, some feedback is delayed until the end of a piece of instruction. Rankin and Trepper (1978) noted that in these cases, the stimulus is presented a second time, as compared to immediate feedback in which only the correct response is presented. Thus, they hypothesized that delayed feedback may be more effective because the learner receives an additional exposure to the stimulus.
Four studies examined the effects of delayed feedback. Anderson et al. (1971, Exp. I) found that KCR with a 15-s pre-feedback delay following incorrect answers was superior to no feedback, but not different from immediate KCR, KCR with AUC, and a condition in which participants could elect after each response whether or not to see KCR. Sullivan, Schutz, and Baker (1971) found immediate KR to be better than KR delivered at the beginning of the subsequent session. Gaynor (1981) found no differences between immediate KCR, 30-s delayed KCR, KCR at the end of the lesson, and no feedback. Clariana et al. (1991) found immediate KCR, KR with AUC, and immediate KR with all questions repeated with KCR at the end of the lesson to all be better than no feedback (on both posttest and retention test), but not different from each other.
The preponderance of evidence supports delayed KCR as effective, but no more effective than immediate KCR. Apparently, the immediacy of KCR feedback is not critical. However, none of these studies involved delayed elaboration feedback. Given that elaboration feedback has generally been shown to be a more effective approach than KCR, it would be interesting to see how delay impacts its effectiveness.
Post-feedback delay. Anderson et al. (1971, Exp. II) found that a 15-s post-feedback delay with the frame and KCR in view after incorrect answers was not significantly better than no feedback. Similarly, Crosbie and Kelly (1994, Exp. I) found that a noncontingent 10-s delay with a blank screen following KCR produced better within-program responding than a contingent 10-s delay with a blank screen following incorrect responses or KCR with no delay, when all three conditions included repeating incorrect frames at the end of each set of instruction until all frames were answered correctly. In their follow-up experiment (1994, Exp. II), they showed that a 10-s delay after KCR with the frame and KCR visible produced better within-program responding than a 10-s delay with a blank screen or KCR with no delay using the same review technique. Finally, Kelly and Crosbie (1997) replicated the benefit, on within-program responding and posttest performance, of the visible frame and KCR during a 10-s delay with and without review.
Based on these studies, it seems that an imposed delay is beneficial in programmed instruction and that the benefit comes from additional exposure to the instructional frame and feedback, rather than from a punishment effect, which is why we discussed the findings here and not in the later section on consequences. Further study is warranted to determine whether such imposed delays would also be effective with elaboration feedback, which would provide more information during the delay period. Also, though participants in the studies by Crosbie and Kelly (1994, Exp. I and II) and Kelly and Crosbie (1997) rated delays similarly to or better than no delay, it is worth investigating shorter delays to potentially increase the efficiency of instruction using the technique.
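As a rough illustration of the trial structure compared in these studies, a single KCR trial with an imposed postfeedback delay might be sketched as follows; the parameter names and the injectable `display`/`pause` hooks are assumptions for illustration, not a description of the original apparatus:

```python
import time

def kcr_with_postfeedback_delay(frame_text, correct_answer, response,
                                delay_s=10.0, blank_screen=False,
                                display=print, pause=time.sleep):
    """One trial of KCR followed by an imposed postfeedback delay.
    With blank_screen=False the frame and feedback stay in view during the
    delay (the more effective arrangement in Crosbie & Kelly, 1994, Exp. II);
    with blank_screen=True the screen is cleared instead."""
    correct = response == correct_answer
    feedback = ("Correct." if correct
                else f"Incorrect. The correct answer is {correct_answer}.")
    display(frame_text)
    display(feedback)
    if blank_screen:
        display("")      # clear the display for the delay period
    pause(delay_s)       # the learner cannot advance until the delay elapses
    return correct
```

The injectable hooks simply make the sketch testable; in an actual program `display` and `pause` would drive the learner's screen.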
Review Feedback
There are a variety of ways to arrange review feedback. One, called answer-until-correct (AUC) feedback, is to require the learner to respond until he or she responds correctly. A positive aspect of this type of feedback is that the last answer the learner makes is the correct one (Clariana et al., 1991), and the ability to continue through the instruction may function as reinforcement. The learner may also engage in more effortful thinking before the first response, because answering correctly avoids having the item re-presented and progress through the program delayed; by contrast, learners who receive KCR after one try may guess, because guessing increases the rate at which they can proceed. This latter effect should hold even when learners are required to respond only a set number of times, whether or not they have responded correctly, which is often how this type of feedback is arranged. However, this approach may also be aversive to the learner (Dick & Latta, 1970), making it a somewhat less desirable approach even if effective. Review feedback has also been arranged by re-presenting incorrectly answered frames at the end of the instruction. Several studies (Albertson, 1986; Collins et al., 1987; Crosbie & Kelly, 1994, Exp. I & II; Gilman, 1969; Nagata, 1993) included review feedback in all groups; they are not discussed in this section because no comparison can be made.
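The two review arrangements just described can be sketched in Python; the function names, the `(question, answer)` frame tuples, and the `respond` callback are illustrative assumptions, not anything specified in the reviewed studies:

```python
def answer_until_correct(frames, respond, max_tries=None):
    """AUC review: each frame is repeated until the learner answers correctly
    (or, in the capped variant, until a set number of tries is used up), so
    the last response to every mastered frame is the correct one."""
    attempts = {}
    for question, answer in frames:
        tries = 0
        while True:
            tries += 1
            if (respond(question) == answer
                    or (max_tries is not None and tries >= max_tries)):
                break
        attempts[question] = tries
    return attempts

def end_of_sequence_review(frames, respond):
    """Alternative arrangement: one pass through the frames with immediate KR,
    then re-present every incorrectly answered frame at the end."""
    missed = [(q, a) for q, a in frames if respond(q) != a]
    for question, _answer in missed:   # single review pass (with KCR)
        respond(question)
    return [q for q, _ in missed]
```

The contrast the literature draws is visible in the control flow: AUC interrupts progress at the point of error, whereas end-of-sequence review defers all re-presentation until the instruction is complete.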
In the four reviewed studies that involved a comparison of AUC with a no-feedback condition, AUC was found to be superior in two (Anderson et al., 1971, Exp. I; Clariana et al., 1991) and no different in the other two (Anderson et al., 1971, Exp. II; Morrison et al., 1995). It should be noted, however, that in Anderson et al. (1971, Exp. II), KCR was provided before AUC, so the subsequent response required little more than copying an answer, and in Morrison et al. (1995), all groups had the opportunity to review instructional frames, which essentially provides elaboration feedback. When compared to other types of feedback, AUC was found to be no better than KR (Waldrop et al., 1986), no better than (Anderson et al., 1971, Exp. I & II; Clariana et al., 1991; Morrison et al., 1995) or inferior to (Clariana, 1990) KCR and no better than (Dempsey et al., 1993) or inferior to (Terrell, 1990; Waldrop et al., 1986) elaboration feedback.
Repeating incorrectly answered frames at the end of an instructional sequence (with immediate KR) was found to have no advantage over KCR in four studies (Anderson et al., 1971, Exp. II; Clariana et al., 1991; Kelly & Crosbie, 1997; Morrison et al., 1995). Clariana et al. (1991) and Morrison et al. (1995) found this form of review to be superior to no feedback; however, Anderson et al. (1971, Exp. II) found no benefit.
Based on this research, it appears that review feedback, regardless of type, may be marginally better than no feedback, has no benefit over KR or KCR, and is generally inferior to elaboration feedback.
Consequences
Munson and Crosbie (1998) showed that punishment (5 cents earned for each correct response, 5 cents lost for each incorrect response) was better than baseline (5 cents earned for each response). This may be a punishment effect, but it could as easily be a reinforcement effect, given that the "punishment" condition also involved a reinforcement contingency that was not present in the baseline condition. Sullivan et al. (1971) found no differences between a per-lesson mastery-test contingency (release from 1/4 drill period for each of 12 tests if a score of 70% or higher was obtained) and a criterion-test contingency (release from 3 drill periods if a score of 70% or higher was obtained on the criterion test). Moore and Smith (1964) found that KR plus 1 cent per correct response produced better within-session performance than KR, KCR via teaching machine, and no feedback with multiple-choice materials, but not with construction-response materials. Pysh et al. (1969) found no advantage of KCR with a self-tallied point per correct response over KCR or no feedback, but it was not clear what, if any, value the points had. Thorkildsen and Reid (1989) found no difference between conditions in which 2nd- to 4th-grade students were presented with "correct" plus either a graphic (e.g., a smiley face) or a video segment following correct responding and a condition in which they received only "correct"; incorrect responses were followed by a short buzz in all conditions.
The results for punishment are somewhat promising and could be explored further; however, the practical utility of response cost during instruction in actual learning situations is questionable. The research on added reinforcers for correct responses is somewhat limited and generally involves stimuli of questionable reinforcing value. In any case, this practice may not be practical in most learning situations, so research efforts should probably be concentrated on more realistic feedback variations.
Discussion and Future Research Directions
This review of feedback in programmed instruction suggests that feedback is superior to no feedback in promoting improved performance on a criterion test. KR is not recommended. KCR is somewhat effective, but further research is needed to determine the situations in which it is useful. Delayed feedback and review feedback seem to be no better than immediate feedback. Elaboration feedback and postfeedback delays with the question and feedback in view appear to be the most effective forms and are worth exploring further.
Given the lack of effectiveness of KR feedback, it seems unlikely that feedback functions primarily as reinforcement. Taken as a whole, this body of research suggests that the primary function of feedback is to provide additional instruction when pre-question information was insufficient.
Consistent with this idea, as noted earlier, Grant et al. (1982) found that elaboration feedback was more effective than KCR feedback. More interestingly, they also showed that presenting the same information before the questions was more effective than using it as feedback and that the feedback groups required more time to complete the instruction.
Kulhavy, Yekovich, and Dyer (1976, 1979) showed that learners' confidence in their answers affected the results of feedback. When learners reported greater confidence in their responses, they spent more time studying KCR feedback after incorrect responses and were more likely to respond correctly to the same item on the posttest than when they reported lower confidence. It would be interesting to see whether the same effect on posttest responding would be found if study time were equalized by imposing a postfeedback delay.
Another avenue worth exploring is similarity of posttest and instructional questions. Clariana et al. (1991) found that the benefits of feedback (KCR, delayed KCR, and AUC) decreased as posttest and instructional questions decreased in similarity. Morrison et al. (1995) reported similar results for KCR and review feedback. Given that both studies used KCR and most feedback research uses posttest items that are the same as or similar to items used during the instruction, it remains to be seen if elaboration feedback would have a substantial effect on correctly answering transformed, paraphrased, or inferential questions.
The majority of the research was conducted with college students. Research with learners from other populations would be useful. In addition, most instructional design research involves exposing participants to just a few pieces of instruction over a short period of time. It would be interesting to see the results of different arrangements of instruction over an extended period of time in an applied learning setting (e.g., a full semester or throughout a training program in a business setting).
In addition, the relationship between performance during instruction and performance on criterion measures needs to be explored. It is unclear whether performance during instruction can be generalized to performance on criterion tests.
Personalization of feedback could also be explored further. Albertson (1986) found that KCR feedback that included the learner's name was more effective than nonpersonalized KCR feedback, when both conditions included AUC. This was the only study we found to investigate personalized feedback, so no general conclusion can be drawn, but given the relative ease of personalizing feedback in current software packages, it warrants additional study.
Finally, more measures of the approachability of instruction are suggested. Only a few researchers gave data on learners' reports of preferences of different feedback arrangements. The most effective instruction will not be useful if potential learners find it aversive and do not use it.
ALBERTSON, L. M. (1986). Personalized feedback and cognitive achievement in computer-assisted instruction. Journal of Instructional Psychology, 13(2), 55-57.
ANDERSON, R. C., & FAUST, G. W. (1967). The effects of strong formal prompts in programmed instruction. American Educational Research Journal, 4, 345-352.
ANDERSON, R. C., KULHAVY, R. W., & ANDRE, T. (1971). Feedback procedures in programmed instruction. Journal of Educational Psychology, 62(2), 148-156.
ANDERSON, R. C., KULHAVY, R. W., & ANDRE, T. (1972). Conditions under which feedback facilitates learning from programmed lessons. Journal of Educational Psychology, 63(3), 186-188.
BHUSHAN, A., & SHARMA, R. D. (1975). Effect of three instructional strategies on the performance of B.Ed. student-teachers of different intelligence levels. Indian Educational Review, 10(2), 24-29.
CHATTERJEE, S., & BASU, M. K. (1987). Effectiveness of a paradigm of programmed instruction. Indian Psychology Review, 32(3), 10-14.
CLARIANA, R. B. (1990). A comparison of answer until correct feedback and knowledge of correct response feedback under two conditions of contextualization. Journal of Computer-Based Instruction, 17(4), 125-129.
CLARIANA, R. B., ROSS, S. M., & MORRISON, G. R. (1991). The effects of different feedback strategies using computer-administered multiple-choice questions as instruction. Educational Technology, Research, and Development, 39(2), 5-17.
COLLINS, M., CARNINE, D., & GERSTEN, R. (1987). Elaborated corrective feedback and the acquisition of reasoning skills: A study of computer-assisted instruction. Exceptional Children, 54(3), 254-262.
CROSBIE, J., & KELLY, G. (1994). Effects of imposed postfeedback delays in programmed instruction. Journal of Applied Behavior Analysis, 27(3), 483-491.
DANIEL, W. J., & MURDOCH, P. (1968). Effectiveness of learning from a programmed text covering the same material. Journal of Educational Psychology, 59(6), 425-431.
DEMPSEY, J. V., LITCHFIELD, B. C., & DRISCOLL, M. P. (1993). Feedback, retention, discrimination error, and feedback study time. Journal of Research on Computing in Education, 25(3), 303-326.
DICK, W., & LATTA, R. (1970). Comparative effects of ability and presentation mode in computer-assisted instruction and programmed instruction. Audio-Visual Communication Review, 18(3), 34-45.
FELDHUSEN, J. F., & BIRT, A. (1962). A study of nine methods of presentation of programmed learning material. Journal of Educational Research, 55, 461-466.
FERNALD, P. S., & JORDAN, E. A. (1991). Programmed instruction versus standard text in introductory psychology. Teaching of Psychology, 18(4), 205-211.
GAYNOR, P. (1981). Effects of feedback delay on retention of computer-based instructional material. Journal of Computer-Based Instruction, 8(2), 28-34.
GILMAN, D. A. (1969). Comparison of several feedback methods for correcting errors by computer-assisted instruction. Journal of Educational Psychology, 60(6), 503-508.
GRANT, L., MCAVOY, R., & KEENAN, J. B. (1982). Prompting and feedback variables in concept programming. Teaching of Psychology, 9(3), 173-177.
HARTLEY, S. S. (1978). Meta-analysis of the effects of individually-paced instruction in mathematics. Dissertation Abstracts International, 38, 4003.
HOLLAND, J. G. (1960). Teaching machines: An application of principles from the laboratory. Journal of the Experimental Analysis of Behavior, 3, 275-287.
KELLY, G., & CROSBIE, J. (1997). Immediate and delayed effects of imposed postfeedback delays in computerized programmed instruction. The Psychological Record, 47, 687-698.
KIM, J. L., & PHILLIPS, T. L. (1991). The effectiveness of two forms of corrective feedback in diabetes education. Journal of Computer-Based Instruction, 18(1), 14-18.
KULHAVY, R. W. (1977). Feedback in written instruction. Review of Educational Research, 47(2), 211-232.
KULHAVY, R. W., YEKOVICH, F. R., & DYER, J. W. (1976). Feedback and response confidence. Journal of Educational Psychology, 68(5), 522-528.
KULHAVY, R. W., YEKOVICH, F. R., & DYER, J. W. (1979). Feedback and content review in programmed instruction. Contemporary Educational Psychology, 4, 91-98.
KULIK, J. A., COHEN, P. A., & EBELING, B. J. (1980). Effectiveness of programmed instruction in higher education: A meta-analysis of findings. Educational Evaluation and Policy Analysis, 2, 51-64.
KULIK, C. C., SCHWALB, B. J., & KULIK, J. A. (1982). Programmed instruction in secondary education: A meta-analysis of findings. Journal of Educational Research, 75(3), 133-138.
LALLEY, J. P. (1998). Comparison of text and video as forms of feedback during computer assisted learning. Journal of Educational Computing Research, 18(4), 323-338.
LUBLIN, S. C. (1965). Reinforcement schedules, scholastic aptitude, autonomy need and achievement in a programmed instruction course. Journal of Educational Psychology, 56, 295-302.
MARKLE, S. M. (1990). Designs for instructional designers. Champaign, IL: Stipes Publishing Co.
MCDONALD, J. K., YANCHAR, S. C., & OSGUTHORPE, R. T. (2005). Learning from programmed instruction: Examining implications for modern instruction technology. Educational Technology Research and Development, 53(2), 84-98.
MCKENDREE, J. (1990). Effective feedback content for tutoring complex skills. Human-Computer Interaction, 5, 381-413.
MERRILL, J. (1987). Levels of questioning and forms of feedback: Instructional factors in courseware design. Journal of Computer-Based Instruction, 14(1), 18-22.
MOORE, J. W., & SMITH, W. I. (1961). Knowledge of results in self-teaching spelling. Psychological Reports, 9, 717-726.
MOORE, J. W., & SMITH, W. I. (1964). Role of knowledge of results in programmed instruction. Psychological Reports, 14, 407-423.
MORRISON, G. R., ROSS, S. M., GOPALAKRISHNAN, M., & CASEY, J. (1995). The effects of feedback and incentives on achievement in computer-based instruction. Contemporary Educational Psychology, 20, 32-50.
MUNSON, K. J., & CROSBIE, J. (1998). Effects of response cost in computerized programmed instruction. The Psychological Record, 48, 233-250.
NAGATA, N. (1993). Intelligent computer feedback for second language instruction. The Modern Language Journal, 77(3), 330-339.
PRIDEMORE, D. R., & KLEIN, J. D. (1991). Control of feedback in computer-assisted instruction. Educational Technology Research and Development, 39(4), 27-32.
PRIDEMORE, D. R., & KLEIN, J. D. (1995). Control of practice and level of feedback in computer-based instruction. Contemporary Educational Psychology, 20, 444-450.
PYSH, F., BLANK, S. S., & LAMBERT, R. A. (1969). The effects of step size, response mode and knowledge of results upon achievement in programmed instruction. The Canadian Psychologist, 10(1), 49-64.
RANKIN, R. J., & TREPPER, T. (1978). Retention and delay of feedback in a computer-assisted instructional task. Journal of Experimental Education, 46(4), 67-70.
ROBERTS, F. C., & PARK, O. (1984). Feedback strategies and cognitive style in computer-based instruction. Journal of Instructional Technology, 11(2), 63-74.
ROPER, W. J. (1977). Feedback in computer assisted instruction. Programmed Learning and Educational Technology, 14(1), 43-49.
ROSA, E. M., & LEOW, R. P. (2004). Computerized task-based exposure, explicitness, type of feedback, and Spanish L2 development. The Modern Language Journal, 88(2), 192-216.
ROSENSTOCK, E. H., MOORE, W. J., & SMITH, W. I. (1965). Effects of several schedules of knowledge of results on mathematics achievement. Psychological Reports, 17, 535-541.
SALAS, S. B., & DICKINSON, D. J. (1990). The effect of feedback and three different types of corrections on student learning. Journal of Human Behavior and Learning, 7(2), 13-19.
SKINNER, B. F. (1968). The technology of teaching. New York: Appleton-Century-Crofts.
SULLIVAN, H. J., SCHUTZ, R. E., & BAKER, R. L. (1971). Effects of systematic variations in reinforcement contingencies on learner performance. American Educational Research Journal, 8, 135-142.
TERRELL, D. J. (1990). A comparison of two procedures for remediating errors during computer-based instruction. Journal of Computer-Based Instruction, 17(3), 91-96.
THORKILDSEN, R. J., & REID, R. (1989). An investigation of the reinforcing effects of feedback on computer-assisted instruction. Journal of Special Education Technology, 9(3), 125-135.
WALDROP, P. B., JUSTEN, J. E., III, & ADAMS, T. M., II. (1986). A comparison of three types of feedback. Educational Technology, 26, 43-45.
WENTLING, T. L. (1973). Mastery versus nonmastery instruction with varying test item feedback treatments. Journal of Educational Psychology, 65, 50-58.
MATTHEW L. MILLER
Pfizer Global Manufacturing
Correspondence concerning this article should be addressed to Wendy Jaehnig, 8613 Dolphin St., Portage, Michigan, 49024. (E-mail: firstname.lastname@example.org).
Table 1
Summary of Feedback Types in Literature Review

Feedback types (column headings): No FB, KR, KCR, Elab., Delay, Review, Cons., Other. An X marks each feedback type examined by a study.

Moore & Smith, 1964: X X X X
Gilman, 1969: X X X X
Pysh et al., 1969: X X X
Anderson et al., 1971, Exp. I: X X X X X
Anderson et al., 1971, Exp. II: X X X X
Sullivan et al., 1971: X X X
Anderson et al., 1972: X X
Roper, 1977: X X X
Gaynor, 1981: X X X
Grant et al., 1982: X X X
Roberts & Park, 1984: X X
Albertson, 1986: X
Waldrop et al., 1986: X X X
Collins et al., 1987: X X
Merrill, 1987: X X
Thorkildsen & Reid, 1989: X
Clariana, 1990: X X
McKendree, 1990: X X
Salas & Dickinson, 1990: X X
Terrell, 1990: X X
Clariana et al., 1991: X X X X
Kim & Phillips, 1991: X X
Pridemore & Klein, 1991: X X
Dempsey et al., 1993: X X X
Nagata, 1993: X
Crosbie & Kelly, 1994, Exp. I: X
Crosbie & Kelly, 1994, Exp. II: X
Morrison et al., 1995: X X X
Pridemore & Klein, 1995: X X X
Kelly & Crosbie, 1997: X X X
Lalley, 1998: X
Munson & Crosbie, 1998: X
Rosa & Leow, 2004: X X X
Wendy Jaehnig and Matthew L. Miller
The Psychological Record, March 22, 2007