
Enhancing learning in the introductory course.

General education and introductory courses expose students to the foundations of an academic discipline. The broadening of the content areas that define academic disciplines, together with the integration of multimedia resources into teaching, has markedly increased the breadth and depth at which concepts can be presented to enhance traditional lecture materials. Indeed, these changes have prompted many departments to develop a two-semester introductory sequence to better prepare students and to expose them to the content and theory that define an academic major. At many institutions, enrolling in next-level courses and matriculating into an academic major require completion of the introductory course or courses, often with a minimum grade. This gateway reflects not only contemporary enrollment management practices but also the traditional pedagogical view that learning within an academic major is sequential and that foundational concepts presented in lower-level courses should be mastered before students enroll in upper-level courses. Earlier learning, then, is assumed to be accessible for assimilation into the increasingly theoretical frameworks presented in upper-level courses.

Initial investigations of learning in introductory baccalaureate courses revealed high levels of retention (e.g., Cederstorm, 1930; Eurich, 1934; Greene, 1931; Johnson, 1930; Spitzer, 1939; Tyler, 1933; Wert, 1937). In many of these studies, though, curricular factors without contemporary parallel (e.g., 12 hours of weekly class and laboratory time per course) were often examined (e.g., Cederstorm), and assessment of retention beyond the end-of-semester examination was rare. More recent studies examined retention when specialized systems of instruction, specifically Keller's (1968) Personalized System of Instruction (PSI) and Bloom's (1968) Learning-for-Mastery (LFM), were compared with traditional lecture methods (e.g., Goldwater & Acker, 1975). PSI and LFM both emphasize mastery learning but differ significantly owing to the lecture-based structure of LFM and the tutor-based structure of PSI. These structural differences give LFM-instructed students fewer opportunities to take and retake tests than PSI-instructed students receive, and thus the reported differences between the two systems are potentially confounded by the number-of-trials effect typically observed in laboratory studies. Despite these differences and the potential for PSI-instructed students to complete more trials, there is little doubt that specialized systems of instruction create unique opportunities for students to evaluate and increase their level of comprehension through repeated testing (e.g., Hursh, 1976; Kulik, Jaska, & Kulik, 1978; Semb & Ellis, 1992; Semb, Ellis, & Araujo, 1993).

Contemporary changes in higher education, such as the rise of distance learning programs, the growing use of intranet and Internet channels to deliver lecture materials, and the increasing reliance on part-time instructors, have made the implementation of specialized systems of instruction in contemporary introductory courses exceedingly problematic. Despite the ubiquity and popularity of introductory courses, there is a paucity of evidence about the retention of learning in them, and this concern is magnified by the modal test question and answer-recording formats used in introductory courses: the multiple-choice item and the Scantron form, respectively. Test item developers assume that adequately learned facts are not subject to interference from the confluence of correct and incorrect response options; thus, when test materials are presented in a multiple-choice format, misinformation serves as the context within which correct responses must be discriminated. This assumption is surprising in light of repeated demonstrations that exposure to misinformation within test items causes participants to perceive or to remember the misinformation as correct. These memorial consequences of exposure to misinformation have been described as the Negative Suggestion Effect, a result that can be demonstrated even when the misinformation is specifically identified as incorrect (e.g., Brown, Schilling, & Hockensmith, 1999). Indeed, the Negative Suggestion Effect is remarkably similar to the outcomes of seminal studies of eyewitness testimony (see Loftus & Hoffman, 1989). The consistency of these outcomes may help to explain why many instructors bemoan students' failure to perform satisfactorily on end-of-semester tests in the introductory course and students' failure to recognize core introductory-level concepts when those concepts reappear in follow-up courses.

During the past 10 years, a series of programmatic studies on the relative benefits of delayed and immediate feedback for student learning, under classroom and laboratory conditions, has been undertaken (see Brosvic, Epstein, Dihoff, & Cook, 2006c; Dihoff, Brosvic, & Epstein, 2003, 2004; Epstein et al., 2002; Epstein et al., 2003). Proponents of immediate feedback recommend the correction of an incorrect response and the acquisition of the correct response before exiting a test problem or test session (Brosvic, Epstein, Dihoff, & Cook, 2006a, 2006b). In comparison, proponents of delayed feedback recommend imposing a delay of 24 to 48 hours to facilitate the forgetting of errant responses and the acquisition of correct responses in the absence of the interference that item-by-item immediate feedback is postulated to generate (Kulhavy & Stock, 1989). The provision of immediate feedback has consistently produced greater facilitation of retention than delayed feedback, but it is difficult to compare these results with the larger body of literature because of noteworthy differences in the procedures used to define immediate and delayed feedback. For example, definitions of immediate feedback range from the instantaneous presentation of the correct response (e.g., Epstein et al., 2003) to the presentation of correct responses at the next weekly meeting of a class (Robin, 1978). Definitions of delayed feedback, similarly, range from a review of correct responses at the end of a test or after a 24-hr delay (Dihoff, Brosvic, & Epstein, 2003, 2004) to delays of 7 or more days (Bruning, Schraw, & Ronning, 1999; Robin, 1978). To the authors' knowledge there are no published reports on the timeliness of returning examinations, but even the median of the intervals described above likely represents more prompt grading than many students are accustomed to.

The tool developed and validated by Epstein and his colleagues is the Immediate Feedback Assessment Technique, or IF AT, which embodies the theoretical and practical foundations of the teaching machines described by Skinner (1958) and transforms the passive receiver of information into an active demonstrator of skills and knowledge. The IF AT form (E3 Corporation; see Figure 1) is a multiple-choice answer sheet with rows of rectangular answer spaces (e.g., A, B, C, and D) that is nearly identical in layout to the ubiquitous, machine-scored answer sheet available from Scantron Corporation. Participants record an answer by scraping off the opaque, waxy coating that covers the corresponding answer space on the IF AT form. If a symbol (e.g., a star) is printed beneath the coating, the student receives immediate feedback that a correct choice was made; the absence of a symbol provides immediate feedback that an incorrect choice was made. Rather than simply exiting the question, however, the student reviews the remaining response options, continues to respond until the correct answer is discovered (a self-correction procedure), and exits each question with the correct answer.
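To make the self-correction procedure concrete, the following minimal sketch (in Python) simulates a student scratching answer spaces in order of preference until the correct option is uncovered; the function name and the representation of items and guesses are illustrative only and are not part of the IF AT materials.

```python
# Minimal sketch of the IF AT answer-until-correct (self-correction) procedure.
# The representation of items and guesses is hypothetical.

def answer_until_correct(correct_option, ranked_guesses):
    """Scratch options in the student's order of preference until the
    correct one is uncovered; return the number of attempts needed."""
    for attempt, guess in enumerate(ranked_guesses, start=1):
        if guess == correct_option:
            return attempt          # symbol uncovered: exit with the correct answer
    raise ValueError("the correct option was never selected")

# Example: the first choice is wrong, the second uncovers the symbol.
attempts = answer_until_correct("C", ["B", "C", "A", "D"])   # attempts == 2
first_try_correct = attempts == 1
```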

[FIGURE 1 OMITTED]

The primary focus of the present study was to examine how the timing of feedback affects the acquisition and retention of student learning in an introductory course over intervals during and after the academic semester. Unlike prior studies in which specialized systems of instruction were examined, the present study required changes in neither pedagogy nor assessment, although the immediate feedback procedure required less instructor time than either of the two delayed feedback conditions. The secondary focus was to examine the potential influences of selected demographic variables previously reported to influence performance in introductory courses (e.g., the completion of high school courses in science and psychology). It was predicted that retention would be positively affected by the provision of feedback, with the largest benefits observed when immediate feedback was provided. If the retention of classroom learning is differentially affected by the timing of feedback, then the use of immediate feedback procedures may make introductory course materials more accessible during the remaining components of the academic semester as well as in next-level courses in the major.

Method

Participants

Complete data on the five course examinations and the final examination were collected from 611 undergraduates, 263 male and 348 female; these participants are hereafter referred to as the short-term retention sample. Complete data on the postsemester 3- through 12-month assessments were collected from a subsample of the original 611: 467 undergraduates, 196 male and 271 female; these participants are hereafter referred to as the long-term retention sample. The modal participant in the short- and long-term samples was a Caucasian female who was in her first year of college and majoring in the liberal arts and sciences. The textbook and lecture slides were common to all sections.

Design and Procedure

Five classroom examinations, each with 100 multiple-choice items, were completed during the semester. The final examination consisted of 100 multiple-choice items: 50 new items and 10 items randomly selected from each classroom examination. Examination items were drawn from the test bank supplied with the text. A minimum of 65 male and 85 female participants were randomly assigned to each of the three feedback groups described below, and a like number was randomly assigned to the control group. In the End-of-Test Feedback group, participants recorded their answers on Scantron forms and, on completing each examination, reviewed their answer sheets, test items, and the correct solutions for 30 minutes. In the Delayed Feedback group, participants recorded answers on Scantron forms and, 24 hr after completing each examination, reviewed their answer sheets, test items, and the correct responses for 30 minutes. In the Immediate Feedback group, participants recorded answers on an IF AT form by scratching off an opaque, waxy coating. A correct response was affirmed by a symbol; the absence of a symbol indicated an incorrect response, after which participants were permitted to continue selecting from among the remaining answers until the correct solution was discovered. Participants in the Control condition recorded their responses on Scantron forms and, at the next class meeting, received their machine-scored Scantron forms. All participants completed the final examination using Scantron forms. Although the IF AT method allows the assignment of partial credit (i.e., correct responding on the first attempt is assigned 100% of item credit, whereas correct responding on the second, third, or fourth attempt may be assigned reduced percentages at the instructor's discretion), this procedure was not used, and the results described below are based on the accuracy of initial responses.
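Although partial credit was not used in the present study, the scoring options described above can be summarized in a brief sketch; the specific weights for later attempts shown below are hypothetical and would be set at the instructor's discretion.

```python
# Illustrative IF AT item scoring. The partial-credit weights are hypothetical;
# the present study scored only the accuracy of the initial response.
PARTIAL_CREDIT = {1: 1.00, 2: 0.50, 3: 0.25, 4: 0.00}

def item_score(attempts, use_partial_credit=False):
    """Return the credit earned on one item given the number of attempts used."""
    if use_partial_credit:
        return PARTIAL_CREDIT.get(attempts, 0.0)
    return 1.0 if attempts == 1 else 0.0   # scoring used in the present study
```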

Participants were contacted and asked to complete the long-term retention tests 3, 6, 9, and 12 months after completing the final examination, with all responses recorded on Scantron forms. The retention tests consisted of 50 multiple-choice items, with 25 new items and 5 items randomly selected from each classroom examination. For control purposes, the 25 new items were generated from information on the same textbook pages from which repeated items had been generated.
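The composition of the final examination and the postsemester retention tests follows a simple sampling rule (a fixed number of new items plus a fixed number of items drawn at random from each classroom examination). The sketch below illustrates that rule; the item pools and the seed are hypothetical.

```python
# Sketch of assembling a retention test from item pools (pools are hypothetical).
import random

def assemble_retention_test(new_item_pool, exam_item_pools,
                            n_new=25, n_per_exam=5, seed=None):
    """Draw n_new novel items plus n_per_exam items from each classroom
    examination pool, then shuffle the combined test."""
    rng = random.Random(seed)
    items = rng.sample(new_item_pool, n_new)
    for pool in exam_item_pools:              # one pool per classroom examination
        items += rng.sample(pool, n_per_exam)
    rng.shuffle(items)
    return items                              # 25 + (5 exams x 5 items) = 50 items
```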

Results

Participant Demographics

Potential differences in high school grade point average, current semester and overall college grade point averages, number of advanced placement credits awarded, Scholastic Aptitude Test percentile, credit hours completed, and all other dependent measures described below were examined using a multivariate analysis of variance (MANOVA) with response format (feedback: immediate, 24-hour, end-of-test, control) and sex of participant as between-subjects factors; Bonferroni adjustments were used for post hoc comparisons. Significance was observed for neither the main effects nor their interaction, all F < 0.46 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p > .82, and thus all dependent measures were collapsed across these factors in the analyses that follow.
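For readers who wish to reproduce this style of analysis, a minimal sketch using the statsmodels MANOVA interface is given below; the data file and column names are hypothetical, and the multivariate statistics reported (Wilks' Lambda, Pillai's Trace, Hotelling-Lawley Trace) are those produced by the mv_test method.

```python
# Minimal sketch of the between-subjects demographic MANOVA.
# The file name and column names are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("participants.csv")  # one row per participant

model = MANOVA.from_formula(
    "hs_gpa + college_gpa + ap_credits + sat_percentile + credits_completed"
    " ~ C(feedback) * C(sex)",        # feedback: immediate, 24-hour, end-of-test, control
    data=df,
)
print(model.mv_test())                # Wilks, Pillai, Hotelling-Lawley, Roy per effect
```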

Performance During the Academic Semester

No significant differences in any dependent measure described below were observed between participants assigned to the two delayed feedback conditions (end-of-test, 24 hours later), all F < 1, all p > .5. The data for these conditions were aggregated and are hereafter referred to as delayed feedback. As described above, complete data on five course examinations and the final examination were collected from 611 undergraduates, 263 male and 348 female.

Classroom examinations. Potential differences in examination scores were examined using a MANOVA with response format (feedback: immediate, delayed, control) as the between-subjects factor and classroom examination (1 to 5) as the within-subjects factor; Bonferroni adjustments were used for post hoc comparisons. Significance was observed for neither the main effects nor their interactions, all F < 0.26 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p > .72, and thus the outcomes presented below do not appear to have been significantly influenced by differences in performance prior to the final examination.

Short-term retention. Potential differences in scores on the final examination (see Figure 2) were examined using a MANOVA with response format (feedback: immediate, delayed, control) as the between-subjects factor and item novelty (novel item, item repeated from a classroom examination) as the within-subjects factor, with Bonferroni adjustments used for post hoc comparisons; significant differences were observed for both main effects and their interaction, all F > 5.03 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p < .036. Scores on the repeated test items were higher (a) for both feedback groups than for controls and (b) for the immediate feedback group than for the delayed feedback group, Bonferroni adjustments, all p < .05. The differences described above were specific to items repeated from course examinations, and thus conditional probabilities were calculated to assess learning during the examination and posttest reviews.

Conditional probabilities. Changes in responding were evaluated for test items administered initially on one of the classroom examinations and subsequently on the final examination by calculating the conditional probabilities of correct (C) and incorrect (I) responding on the initial (1) and subsequent (2) administration of each item. The conditional probability values illustrate the beneficial effects of affirming initially correct responding and correcting initially inaccurate responding; potential between-group differences were used to estimate short-term retention during the academic semester. Potential differences in the four conditional probability measures were examined using a MANOVA, with response format (feedback: immediate, delayed, control) as the between-subjects factor and the number of weeks (3, 5, 7, 9, 11) between administration of an item on a classroom examination and the final examination as the within-subjects factor; Bonferroni adjustments were used for post hoc comparisons. Significant differences in the conditional probabilities were observed as a function of the main effects and interaction of response format and the number of weeks between the initial and subsequent administration of an item, all F > 11.89 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p < .0016.
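The four conditional probability measures can be computed directly from paired item-level correctness. The sketch below shows one way to do so, assuming a hypothetical data layout with one row per repeated item per student and Boolean columns for initial and final correctness.

```python
# Sketch of the four conditional probabilities for items repeated on the final
# examination. The DataFrame layout is an assumption: one row per repeated item
# per student, with Boolean columns correct_initial and correct_final.
import pandas as pd

def conditional_probabilities(responses: pd.DataFrame) -> dict:
    """Return P(C2|C1), P(C2|I1), P(I2|C1), and P(I2|I1)."""
    c1 = responses["correct_initial"]
    c2 = responses["correct_final"]
    return {
        "C2/C1": (c2 & c1).sum() / c1.sum(),        # retention of learning
        "C2/I1": (c2 & ~c1).sum() / (~c1).sum(),    # learning from correction
        "I2/C1": (~c2 & c1).sum() / c1.sum(),       # forgetting
        "I2/I1": (~c2 & ~c1).sum() / (~c1).sum(),   # failure to learn
    }
```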

[FIGURE 2 OMITTED]

C2/C1 values (see Figure 3, top panel), which can be characterized as measures of both the retention of learning during the semester and the value of affirming correct responding, were higher for (a) the immediate feedback group than for the delayed feedback and control groups at Weeks 3 through 11, (b) the delayed feedback group than for controls at Weeks 3 through 7, (c) the immediate feedback group at Weeks 3 and 5 than at Week 11, and (d) the delayed feedback group at Weeks 3 and 5 than at Week 11, Bonferroni adjustments, all p < .01. C2/I1 values (see Figure 3, bottom panel), which can be characterized as measures of learning during the test-taking process and of the value of correcting initially inaccurate responses, were higher for (a) the immediate feedback group than for the delayed feedback and control groups at Weeks 3 through 11, (b) the delayed feedback group than for controls at Weeks 3 through 7, (c) the immediate feedback group at Weeks 3 and 5 than at Week 11, and (d) the delayed feedback group at Week 3 than at Week 11, Bonferroni adjustments, all p < .01. I2/C1 values (see Figure 4, top panel), which can be characterized as measures of forgetting during the semester and not profiting from the affirmation of correct responding, were lower for (a) the immediate feedback group than for the delayed feedback and control groups at Week 3, (b) the delayed feedback group than for controls at Weeks 3 through 7, (c) the immediate feedback group at Week 3 than at Week 11, and (d) the delayed feedback group at Week 3 than at Weeks 9 and 11, Bonferroni adjustments, all p < .05. I2/I1 values (see Figure 4, bottom panel), which can be characterized as measures of the failure to learn during the semester or from correction during an examination, or both, were lower for (a) the immediate feedback group than for the delayed feedback and control groups at Weeks 3 through 11, (b) the delayed feedback group than for controls at Weeks 3 through 7, (c) the immediate feedback group at Week 3 than at Weeks 9 and 11, and (d) the delayed feedback group at Week 3 than at Weeks 9 and 11, Bonferroni adjustments, all p < .05.

Postsemester Performance

As described above, complete data on the postsemester 3- through 12-month assessments were collected from 467 undergraduates, 196 male and 271 female. Control items were drawn from materials on the same textbook page as the repeated test items, and potential differences were evaluated using a MANOVA with response format (feedback: immediate, delayed, control) as the between-subjects factor. No differences in performance on the control items were observed as a function of response format, all F < 1.89 for response format, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p > .43. Collectively, these results suggest that participants did not review text materials prior to or between the Month 3 through Month 12 assessments.

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

Participant attrition. Attrition rates were remarkably constant across the factors of response format (feedback: immediate, delayed, control) and months since the final examination (months: 3, 6, 9, and 12). Potential differences in the four conditional probability values described below were examined using a MANOVA with response format (feedback: immediate, delayed, control) and attrition status (did not complete all of the Month 3 to Month 12 assessments, completed all of the assessments) as between-subjects factors and postsemester interval (month: 3, 6, 9, and 12) as the within-subjects factor. Significance was observed for neither the main effects nor their interactions, all F < 0.67 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p > .85, and thus the outcomes presented hereafter do not appear to have been significantly influenced by differential rates of attrition.

Conditional probabilities. Changes in responding were evaluated for test items used on both a course examination and the Month 3 to Month 12 assessments by calculating the conditional probabilities of correct and incorrect responding. These analyses illustrate the long-term value of affirming initially correct responding and correcting initially inaccurate responding during classroom examinations. Potential differences in the four conditional probability measures were examined using a MANOVA with response format (feedback: immediate, delayed, control) as the between-subjects factor and postsemester month (month: 3, 6, 9, and 12) as the within-subjects factor; Bonferroni adjustments were used for post hoc comparisons. Significant differences in the conditional probabilities were observed as a function of the main effects and interaction of response format and postsemester month, all F > 9.72 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p < .003.

C2/C1 values (see Figure 5, top panel), which can be characterized here as a measure of the retention of learning beyond the academic semester, were higher for the immediate feedback group than for the delayed feedback and control groups, Bonferroni adjustments, all p < .05. C2/C1 values were also higher for the immediate feedback group at Month 3 than at Month 12, Bonferroni adjustments, all p < .05. C2/I1 values (see Figure 5, bottom panel), which can be characterized as a measure of the retention of learning during the test-taking process, were higher for the immediate feedback group than for the delayed feedback and control groups at Months 3 through 12, Bonferroni adjustments, all p < .05. I2/C1 values (see Figure 6, top panel), which can be characterized as a measure of forgetting, were lower for the immediate feedback group than for the delayed feedback and control groups at Months 3 through 12, Bonferroni adjustments, all p < .05. I2/I1 values (see Figure 6, bottom panel), which can be characterized as a measure of the failure to learn, were lower for the immediate feedback group than for the delayed feedback and control groups at Months 3 through 12, Bonferroni adjustments, all p < .05.

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

Participant debriefings. The results of debriefing participants who did not matriculate into the psychology major indicated that (a) 76% had sold their textbooks at the end of the academic semester, (b) 62% could not locate class notes, (c) less than 3% discussed the Month 3 to 12 assessments with a former class member, and (d) less than 2% reviewed some introductory course materials during the Month 3 to 12 period. The results of debriefing participants who matriculated into the psychology major indicated that (a) 12% had sold their textbook at the end of the academic semester, (b) 28% could not locate class notes, (c) less than 7% discussed the Month 3 to 12 assessments with a former class member, and (d) less than 4% reviewed some introductory course materials during the Month 3 to 12 period.

Demographic Analyses

Role of high school psychology. Potential differences in the four conditional probability values described above were examined using a MANOVA with response format (feedback: immediate, delayed, control) and completion of high school psychology (status: did not complete high school psychology, completed high school psychology) as between-subjects factors and postsemester interval (month: 3, 6, 9, and 12) as the within-subjects factor. Significance was observed for neither the main effects nor their interactions, all F < 0.19 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p > .73, and thus the outcomes presented above do not appear to have been significantly influenced by completion of a high school course in psychology.

Role of high school science courses. Potential differences in the four conditional probability values described above were examined using a MANOVA with response format (feedback: immediate, delayed, control) and completion of high school science courses (number of completed science courses: 0, 1, 2 or more) as between-subjects factors and postsemester interval (month: 3, 6, 9, and 12) as the within-subjects factor. Significance was observed for neither the main effects nor their interactions, all F < 2.01 for main effects, interactions, Wilks' Lambda, Pillai's Trace, and Hotelling-Lawley Trace, all p > .58, and thus the outcomes presented above do not appear to have been significantly influenced by enrollment in high school science courses.

Discussion

A goal of an introductory course is to expose students to the foundations of an academic discipline. A more important goal is to help students acquire those foundations and to recognize and recall them during and after the course. It is clear from the present results that the provision of immediate feedback effectively promotes the latter goal. A visual inspection of the main effects and interactions in Figures 2-6 indicates that students provided with immediate feedback incorporated the informational aspects of that feedback into their cognitive processes, a supposition for which several lines of support emerge. Specifically, the increases in retention (a) were observed only for items repeated from previous classroom examinations, (b) did not differ significantly between participants who enrolled in additional psychology courses and those who did not, and (c) did not differ between participants who completed each of the Month 3 to 12 assessments and those who did not. The results described above do not appear to have been significantly influenced by the demographic variables of sex of participant, completion of a high school course in either psychology or the natural sciences, or differential participant attrition. Indeed, the only variable found to exert a significant influence during both the short- and long-term assessments was the timing of feedback. The collective results of the present study support the growing body of literature showing that immediate feedback is more effective than delayed feedback for classroom learning and for the retention of classroom learning. The feedback provided by the IF AT is an effective adjunctive tool that supports but does not supplant the educator (Brosvic, Epstein, Dihoff, & Cook, 2005; Brosvic et al., 2006a; Brosvic et al., 2006b; Epstein et al., 2003).

The IF AT provides individualized performance feedback during the testing process, regardless of the size of a lecture course. The IF AT method also allows the assignment of partial credit, and although some instructors may legitimately object to awarding partial credit, we have found that the process of answering until correct enhances students' ability to self-correct initially inaccurate responses. When an initially correct response is reinforced through affirmation, it is more likely to be repeated on future assessments. In contrast, feedback that is withheld until one or more subsequent test items have been answered (see Brosvic et al., 2006a, 2006b) appears to be of limited value, and this raises important questions about the value of post hoc reviews of tests in faculty offices and the "postmortem" review of tests after their return. Motivation plays an important role in learning, and feedback given after the conclusion of an examination does not change the outcome of an already-completed test. Thus, students may not be motivated to learn what the correct answer "should have been." On the other hand, if feedback is presented while the student is answering a question and that feedback can be used in a subsequent attempt at an answer for partial credit, motivation persists and learning occurs. This outcome may be specific to multiple-choice tests; a postmortem review of responses to essay questions may be of value, and this possibility is currently under evaluation in our laboratories and classrooms.

An additional line of inquiry undertaken in this study was the examination of two demographic variables previously reported to influence performance in the introductory course: completion of high school courses in psychology and in the natural sciences (e.g., Carstens & Beck, 1986; Dombrodt & Popplestone, 1975; Nathanson, Paulhus, & Williams, 2004; Thompson & Zamboanga, 2003). It is intuitive that prior exposure to course materials at the high school level might enhance students' recognition or acquisition of those materials at the college level, but this relationship was not observed in the present study. This outcome is not surprising, however, because participants who had completed such coursework reported that their high school assessments were completed in the absence of immediate feedback.

We have found that immediate feedback transforms the multiple-choice examination into a learning opportunity and the student into an active learner. This combination produces a predictable synergy that creates a number of "learnable moments" in which the educator can be passive, and these learnable moments occur more reliably than the vaunted "teachable moments" that educators prize even though such moments defy definition and are said to be unpredictable. A learnable moment, in contrast, is predictable: it occurs when a problem, a proposed solution, and feedback are concurrent in the presence of motivation. All of these conditions exist when a multiple-choice test giver employs the IF AT and allows partial credit for proximate knowledge while students answer until correct.

The multiple-choice items in the present study primarily assessed students' knowledge of the definitions of basic psychological concepts rather than their ability to apply those concepts. In the domain of operant conditioning, for example, test items focused on the definitions of the schedules of reinforcement rather than on the identification of the schedule in effect in a hypothetical situation. Additional studies are in progress to examine whether the enhanced retention observed for students tested with immediate feedback translates into a measurable transfer of learning to next-level courses in an academic major. Similar enhancements and connections have been posited for any number of teaching innovations that have appeared in the past few years (e.g., writing across the curriculum, journaling, problem-based learning), most of which have been transient in popularity and unsupported by objective measures of student learning. The principles that underlie feedback are among the oldest and most thoroughly vetted in the psychological and educational sciences, and the IF AT capitalizes on these principles while requiring changes to neither pedagogy nor assessment. If a carryover effect of learning from the introductory course to the next-level course can be demonstrated, then immediate feedback may provide one means of building connections between the introductory course and next-level courses within an academic major, as well as a means through which such connections could be built into a curriculum. To these ends, the IF AT is presented to the larger community of educators for continued validation and development.

References

BLOOM, B. S. (1968). Learning for mastery. Evaluation Comment, 1, 1-12.

BROSVIC, G. M., EPSTEIN, M. L., DIHOFF, R. E., & COOK, M. J. (2005). Efficacy of error for the correction of initially-incorrect assumptions and of feedback for the affirmation of correct responding: Learning in the classroom. The Psychological Record, 55, 401-418.

BROSVIC, G. M., EPSTEIN, M. L., DIHOFF, R. E., & COOK, M. J. (2006a). Feedback facilitates the acquisition and retention of numerical fact series by elementary school students with mathematics learning disabilities. The Psychological Record, 56, 35-54.

BROSVIC, G. M., EPSTEIN, M. L., DIHOFF, R. E., & COOK, M. J. (2006b). Adjunctive role for immediate feedback in the acquisition and retention of mathematical fact series by elementary school students classified with mild mental retardation. The Psychological Record, 55, 39-66.

BROSVIC, G. M., EPSTEIN, M. L., DIHOFF, R. E., & COOK, M. J. (2006c). Retention of Esperanto is affected by delay interval task and item closure: A partial resolution of the delay-retention effect. The Psychological Record, 56, 597-615.

BROWN, A. S., SCHILLING, H., & HOCKENSMITH, M. L. (1999). The Negative Suggestion Effect: Pondering incorrect alternatives may be hazardous to your knowledge. Journal of Educational Psychology, 91, 756-764.

BRUNING, R., SCHRAW, G., & RONNING, R. (1999). Cognitive psychology and instruction. Columbus, OH: Merrill Prentice Hall.

CARSTENS, C. B., & BECK, H. P. (1986). The relationship of high school psychology and natural science courses to performance in a college introductory psychology course. Teaching of Psychology, 13, 116-118.

CEDERSTORM, J. A. (1930). Retention of information gained in courses in college zoology. Journal of Genetic Psychology, 38, 516-520.

DIHOFF, R. E., BROSVIC, G. M., & EPSTEIN, M. L. (2003). The role of feedback during academic testing: The delay retention effect revisited. The Psychological Record, 53, 533-548.

DIHOFF, R. E., BROSVIC, G. M., & EPSTEIN, M. L. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54, 207-231.

DOMBRODT, F., & POPPLESTONE, J. (1975). High school psychology revisited: Student performance in a college-level psychology course. Teaching of Psychology, 13, 129-133.

EPSTEIN, M. L., BROSVIC, G. M., DIHOFF, R. E., LAZARUS, A. D., & COSTNER, K. L. (2003). Effectiveness of feedback during the testing of preschool children, elementary school children, and adolescents with developmental delays. The Psychological Record, 53, 177-195.

EPSTEIN, M. L., LAZARUS, A. D., CALVANO, T. B., MATTHEWS, K. A., HENDEL, R. A., EPSTEIN, B. B., & BROSVIC, G. M. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52, 187-201.

EURICH, A. D. (1934). Retention of knowledge acquired in a course in general psychology. Journal of Applied Psychology, 18, 209-219.

GOLDWATER, B. C., & ACKER, L. E. (1975). Instructor-paced, mass testing for mastery performance in an introductory psychology course. Teaching of Psychology, 15, 151-152.

GREENE, E. B. (1931). The retention of information learned in college courses. Journal of Educational Research, 24, 262-273.

HURSH, D. E. (1976). Personalized systems of instruction: What do the data indicate? Journal of Personalized Instruction, 1, 91-105.

JOHNSON, P. O. (1930). The permanence of learning in elementary botany. Journal of Educational Psychology, 21, 37-47.

KELLER, F. S. (1968). Good-bye, teacher ... Journal of Applied Behavior Analysis, 1, 79-89.

KULHAVY, R. W., & STOCK, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1, 279-308.

KULIK, J. A., JASKA, P., & KULIK, C. (1978). Research on component features of Keller's personalized system of instruction. Journal of Personalized Instruction, 3, 2-14.

LOFTUS, E., & HOFFMAN, H. G. (1989). Misinformation and memory: The creation of new memories. Journal of Experimental Psychology: General, 118, 100-104.

NATHANSON, C., PAULHUS, D. L., & WILLIAMS, K. M. (2004). The challenge to cumulative learning: Do introductory courses actually benefit advanced students? Teaching of Psychology, 31, 5-9.

ROBIN, A. L. (1978). The timing of feedback in personalized instruction. Journal of Personalized Instruction, 3, 81-87.

SEMB, G. B., & ELLIS, J. A. (1992, April). Knowledge learned in college: What is remembered? Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.

SEMB, G. B., ELLIS, J. A., & ARAUJO, J. (1993). Long-term retention for knowledge learned in school. Journal of Educational Psychology, 85, 305-316.

SKINNER, B. F. (1958) Teaching machines. Science, 128, 969-977.

SPITZER, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.

THOMPSON, R. A., & ZAMBOANGA, B. L. (2003). Prior knowledge and its relevance to student achievement in introduction to psychology. Teaching of Psychology, 30, 96-101.

TYLER, R. W. (1933). Permanence of learning. Journal of Higher Education, 4, 203-204.

WERT, J. E. (1937). Twin examination assumptions. Journal of Higher Education, 8, 136-140.

GARY M. BROSVIC and MICHAEL L. EPSTEIN

Rider University

Address correspondence to Gary M. Brosvic, Department of Psychology, Rider University, Lawrenceville, NJ 08648. (E-mail: Brosvic@rider.edu.)