ASSESSING STUDENTS' KNOWLEDGE OF THE AGING PROCESS
While the demographics are impressive, the impact of the "baby-boom generation" reaching retirement age has yet to be felt. Thirteen years from now, the first of the "baby-boom generation" will reach what is now considered to be retirement age. By the year 2030, the entire "baby-boom" cohort will have passed what is now considered to be the traditional age of retirement (AARP, 1996).
In light of these demographic shifts, academicians have gradually been developing curriculum designed to prepare individuals to respond to the aging of the American population. In 1957, only 57 institutions offered courses in gerontology (Donahue, 1960). By the late 1980s, however, the number of U.S. institutions offering courses in gerontology had grown to approximately 1,155 (Peterson, Douglass, Connelly, & Bergestone, 1987). Today, it is estimated that more than 1,600 institutions offer courses in gerontology (Peterson, Wendt, & Douglass, 1994).
While the increase in the number of gerontology programs in American colleges and universities is to be expected given the demographic shifts in the population, an enduring question accompanying that growth concerns the effectiveness of the ever-expanding gerontology curriculum. Thus, the primary purpose of this study is to assess the instructional effectiveness of a gerontology course using Palmore's Facts on Aging Quiz I (FAQ I) and Facts on Aging Quiz II (FAQ II). Specifically, the impact of a one-semester course in gerontology on students' knowledge of the aging process and their biases toward the elderly will be examined.
Previous Uses of FAQ I and FAQ II
Since their initial appearance in the empirical literature (Palmore, 1977; 1981), FAQ I and FAQ II have been widely used. The reasons for the popularity of the quizzes are numerous. The quizzes are short (25 true-or-false items each), are designed to cover basic physical, mental, and social facts about aging, have been empirically documented, and have been tested for validity and reliability (Duerson, Thomas, Chang, & Stevens, 1992; Palmore, 1977; 1980). In addition, the results of the quizzes can be used in a variety of ways, including stimulating group discussion, identifying misconceptions about aging, indirectly measuring bias toward the aged, and evaluating the effectiveness of training in gerontology (Palmore, 1981).
The results of several studies using the quizzes have been reported with samples including social workers (Barresi & Brubaker, 1979), retirees (Miller & Acuff, 1982), adolescents (Doka, 1985-86), and medical students (Duerson et al., 1992). In addition, numerous samples of undergraduate students have been studied (Courtenay & Weidemann, 1985; Luszcz, 1982; Miller & Dodder, 1980; Palmore, 1977; 1981). As a general rule, individuals with training in gerontology have scored higher on the quizzes (Palmore, 1980). Citing several studies that employed a pre-test, post-test format, Palmore (1980) concluded that individuals who received training in gerontology consistently scored higher on the post-test than they did on the pre-test.
While both quizzes have remained virtually the same since their first appearance in the empirical literature, various scholars have attempted to build upon and improve FAQ I and FAQ II. For instance, hypothesizing that FAQ I was characterized by serious theoretical problems due to vague terminology, Miller and Dodder (1980) developed a revised version of the instrument. The results of their study, however, indicated that changing the wording of FAQ I did not make a difference in how questions were answered. They did recommend, though, that a third response option of "don't know" be added to the instrument in order to reduce the amount of guessing done by respondents.
Building upon the recommendation of Miller and Dodder (1980), Courtenay and Weidemann (1985) included a "don't know" response in their study of undergraduate students. Their findings indicated that the addition of a "don't know" response option helped to reduce guessing, which increased the likelihood of a more accurate score.
Methodology

In order to assess the instructional effectiveness of a course in gerontology, a pre-test, post-test format was used with a control group and an experimental group (total N = 55). The control group consisted of students enrolled in an undergraduate criminal justice course at a university in the Southwest (N = 27). The experimental group consisted of students enrolled in an undergraduate social gerontology course at the same university (N = 28). Most students in the experimental group were not majoring in social gerontology but enrolled in the course because of its status as a liberal arts elective. Table 1 provides a demographic summary of both groups.
Table 1. Demographic Summary of Both Groups

                        Control Group   Experimental Group
                        (CJ Class)      (SGER Class)
N                       27              28
Mean Age                22.85           23.46
Sex
  Males                 16              6
  Females               11              22
Classification
  Senior                9               11
  Junior                14              14
  Sophomore             4               3
Major
  Criminal Justice      17              2
  Sociology             3               11
  Psychology            1               7
  Social Gerontology    0               6
  Other                 6               2
FAQ I and FAQ II were combined to create a 50-question instrument regarding the aging process. Based on the recommendations of Miller and Dodder (1980) and Courtenay and Weidemann (1985), a "don't know" response option was added in order to reduce the amount of guessing. Members of both groups were asked to complete FAQ I and FAQ II at the beginning of the semester and again near the end of the semester. Due to withdrawals from the courses during the semester, the number of students who completed the pre-test was slightly larger than the number who completed the post-test. However, only those students who completed both the pre-test and the post-test were included in the study.
Following the pre-test, the quizzes and responses were collected and evaluated by the authors. For each correct answer, 1 point was awarded; for each incorrect answer, 0.5 point was subtracted. A "don't know" response was assigned 0 points. This scoring system was selected because it focused on the number of correct answers given without penalizing respondents for using the "don't know" option (Duerson et al., 1992). A mean score was computed along with a bias score for each group. Palmore (1977; 1980) classified 16 items from FAQ I as indicating a negative bias toward the aged if marked incorrectly; FAQ I also included 5 items indicating a positive bias toward the aged if marked incorrectly. FAQ II included 14 items that could be used to determine a negative bias toward the aged and 5 items that indicated a positive bias. The percentage of errors on the negative bias items was subtracted from the percentage of errors on the positive bias items to produce a net bias score for each group. Subtracting percentages rather than raw numbers accommodated the imbalance between the number of negative and positive items (Palmore, 1981).
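As a concrete illustration, the scoring scheme and net bias calculation described above can be sketched in a few lines of Python. The answer key, student responses, and bias-item indices below are hypothetical placeholders, not Palmore's actual items:

```python
# Sketch of the scoring scheme: +1 per correct answer, -0.5 per incorrect
# answer, 0 for a "don't know" (DK) response. All data here are illustrative.

def quiz_score(responses, key):
    """responses: list of 'T', 'F', or 'DK'; key: list of 'T'/'F' answers."""
    score = 0.0
    for given, correct in zip(responses, key):
        if given == "DK":
            continue  # "don't know" neither rewards nor penalizes
        score += 1.0 if given == correct else -0.5
    return score

def net_bias(all_responses, key, neg_items, pos_items):
    """Percentage of errors on positive-bias items minus percentage of errors
    on negative-bias items; a negative result indicates predominantly
    negative bias toward the aged."""
    def error_pct(items):
        errors = total = 0
        for resp in all_responses:  # one response list per student
            for i in items:
                total += 1
                if resp[i] != "DK" and resp[i] != key[i]:
                    errors += 1
        return 100.0 * errors / total
    return error_pct(pos_items) - error_pct(neg_items)

# Hypothetical 4-item quiz with two students:
key = ["T", "F", "T", "F"]
students = [["T", "F", "DK", "T"], ["F", "F", "T", "F"]]
print(quiz_score(students[0], key))  # 2 correct, 1 wrong, 1 DK -> 1.5
```

The `continue` branch is what implements the "no penalty for honesty" design choice: a student who admits uncertainty scores 0 on that item rather than risking the 0.5-point deduction for a wrong guess.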
Students were not informed of the results of the pre-test, nor was any mention made of the forthcoming post-test. This was done to reduce the possibility of a practice or memory effect. Following the post-test, a mean and bias score for each group was calculated again in order to allow for a comparison of pre-test and post-test results. Once the post-test was evaluated, students were given the opportunity to review the questions, the results of both quizzes, and the documentation associated with the correct responses.
Results

The results of the pre-test and post-test for both groups are presented in Table 2. At Time 1, the mean score for the control group was 13.889 with a standard deviation of 5.85. The range of scores was 2.5 to 25.5. The mean score at Time 1 for the experimental group was slightly higher at 17.46 with a standard deviation of 6.87. The range of scores was 2.5 to 32. The net bias score for the control group was -9.17, while the net bias score for the experimental group was -6.6. Members of the control group were more likely than members of the experimental group to use the "don't know" response option.
Table 2. Pre-Test and Post-Test Results

                    Control Group   Experimental Group
                    (CJ Class)      (SGER Class)
N                   27              28
Mean Score
  Time 1            13.889          17.46
  Time 2            14.185          24.107*
Range
  Time 1            2.5-25.5        2.5-32.0
  Time 2            3.0-27.5        13.5-32.0
S.D.
  Time 1            5.85            6.87
  Time 2            6.28            4.22
Net Bias
  Time 1            -9.17           -6.6
  Time 2            -9.20           -14.0
% DK Response
  Time 1            32.12%          20.36%
  Time 2            26.74%          7.29%

* t = 5.31, p < .01
At Time 2, the mean score of the control group was similar to the Time 1 mean score. The range and standard deviation changed somewhat, though not excessively. For the experimental group, the mean score at Time 2 was noticeably higher than at Time 1. In addition, the range of scores decreased, as did the standard deviation. The net bias score at Time 2 for the control group was nearly identical to Time 1. For the experimental group, the net bias score more than doubled in the negative direction from Time 1 to Time 2. Individuals in both groups were less likely to use the "don't know" response option at Time 2.
A t-test was used to determine if the increase in mean scores for each group was statistically significant. For the control group, the t-value was .21 and was not statistically significant at the desired level. For the experimental group, the t-value was 5.31 and was statistically significant at the .01 level.
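For readers unfamiliar with the procedure, a paired-samples t statistic of the kind reported here is computed from each student's pre-test/post-test difference: the mean difference divided by its standard error. The sketch below uses hypothetical scores, not the study's raw data:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for matched pre/post scores (sketch)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample variance of the differences (n - 1 denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    # t = mean difference / standard error of the mean difference
    return mean_d / math.sqrt(var_d / n)

# Hypothetical quiz scores for five students:
pre = [10.0, 14.5, 8.0, 16.0, 12.5]
post = [15.0, 18.5, 14.0, 19.0, 16.5]
print(round(paired_t(pre, post), 2))  # -> 8.63
```

The resulting t value is then compared against the critical value for n - 1 degrees of freedom at the chosen significance level (here, .01).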
Discussion

The primary purpose of this study was to assess the effectiveness of gerontological instruction by examining the impact of a one-semester course in gerontology on students' knowledge of the aging process and their biases toward the elderly. Three measures were of particular interest: the t-test results, the net bias scores, and the extent to which the "don't know" response option was used. Overall, the results of the study indicate that students who were enrolled in the gerontology course did benefit from the instruction. This finding is consistent with the work of Palmore (1980), who concluded that individuals who received training in gerontology consistently scored higher on the post-test than they did on the pre-test.
The results of the t-tests indicated a noticeable increase in the mean scores for the experimental group. With statistical significance at the .01 level, this finding is a strong indication that the semester of instruction did contribute positively to the students' knowledge of the aging process. It is also worth noting that the range of scores was much smaller at Time 2 than at Time 1 for the experimental group. Apparently, students who enrolled in the course with very little knowledge of the aging process benefited most from the semester of instruction. The negligible increase in mean scores for the control group appears to be a further indication that successfully completing a course in gerontology enhances one's knowledge of the aging process.
As expected, the net bias scores for the control group were basically the same at Time 1 and Time 2, with more negative than positive bias being present. The net bias scores for the experimental group also reflected more negative than positive bias, but to a much greater extent at Time 2 than Time 1. While the presence of more negative than positive bias has been found by other scholars (Palmore, 1977; 1981; Courtenay & Weidemann, 1985), the large increase in negative bias for the experimental group is difficult to explain. One would think that the net bias score would move in a positive direction as students became more aware of the realities of aging and debunked many of the myths associated with the aging process. Apparently, the course curriculum did not adequately address some of the students' misconceptions of aging. This represents an area where action can be taken to improve the curriculum for subsequent classes.
The final area of interest revolves around the use of the "don't know" response option. Based upon the recommendation of Miller and Dodder (1980), Courtenay and Weidemann (1985) included a "don't know" response in their study of undergraduate students and found that it helped to reduce guessing, which increased the likelihood of a more accurate score. The findings of the present study seem to support the previous research. While the percentage of "don't know" responses decreased somewhat for the control group from Time 1 to Time 2, there was no corresponding increase in the mean score. For the experimental group, however, the percentage of "don't know" responses decreased dramatically from Time 1 to Time 2, while the mean score increased at a statistically significant level. Since the scoring system was designed to reward correct responses while providing a "don't know" option so that students were not required to guess, the combination of an increased mean score and a reduction in "don't know" responses suggests that members of the experimental group did increase their knowledge of the aging process.
Summary and Implications
As the number of colleges and universities offering courses in gerontology increases, greater attention will need to be focused on the content of those courses and the extent to which students are acquiring knowledge of the aging process. The present study has assessed the extent to which a group of undergraduate students increased their knowledge of the aging process as measured by Palmore's FAQ I and FAQ II. The results indicate that students who successfully completed a one-semester course in gerontology did increase their knowledge of the aging process as measured by two of the three assessment methods. The mean score for the experimental group increased at a statistically significant level from Time 1 to Time 2. In addition, the number of students in the experimental group who used the "don't know" response option decreased from Time 1 to Time 2. The contradictory finding was that students in the experimental group exhibited more negative bias in their responses at Time 2 than at Time 1.
More research is needed in this area as the gerontological curriculum expands in response to the "graying" of the American population. In addition, more assessment tools need to be developed in order to evaluate the growing gerontological curriculum. Finally, future studies that do employ Palmore's FAQ I and FAQ II should include the "don't know" response option in order to gain a more accurate assessment of what students know about the aging process as opposed to their ability to guess correctly.
American Association of Retired Persons. (1996). A profile of older Americans (PF3049 (1296) D996). Washington, DC: Author.
Atchley, R. C. (1997). Social forces and aging: An introduction to social gerontology (8th ed.). Belmont, CA: Wadsworth Publishing.
Barresi, C. M., & Brubaker, T. H. (1979). Clinical social workers' knowledge about aging: Responses to the "Facts on Aging" Quiz. Journal of Gerontological Social Work, 2(2), 137-146.
Courtenay, B. C., & Weidemann, C. (1985). The effects of a "don't know" response on Palmore's Facts on Aging Quizzes. Gerontologist, 25(2), 177-181.
Doka, K. J. (1985-86). Adolescent attitudes and beliefs toward aging and the elderly. International Journal of Aging and Human Development, 22(3), 173-187.
Donahue, W. (1960). Training in social gerontology. Geriatrics, 15, 501.
Duerson, M. C., Thomas, J. W., Chang, J., & Stevens, C. B. (1992). Medical students' knowledge and misconceptions about aging: Responses to Palmore's Facts on Aging Quizzes. Gerontologist, 32(2), 171-174.
Luszcz, M. A. (1982). Facts on aging: An Australian validation. Gerontologist, 22(4), 369-372.
Miller, R. B., & Acuff, G. F. (1982). Stereotypes and retirement: A perspective from the Palmore Facts on Aging Quiz. Sociological Spectrum, 2(2), 187-199.
Miller, R. B., & Dodder, R. A. (1980). A revision of Palmore's Facts on Aging Quiz. Gerontologist, 20(6), 673-679.
Palmore, E. (1977). Facts on aging: A short quiz. Gerontologist, 17(4), 315-320.
Palmore, E. (1980). The facts on aging quiz: A review of findings. Gerontologist, 20(6), 669-672.
Palmore, E. B. (1981). The facts on aging quiz: Part two. Gerontologist, 21(4), 431-437.
Peterson, D., Douglass, B., Connelly, R., & Bergestone, D. (1987). A national survey of gerontology instruction in American institutions of higher education. Washington, DC: The Association for Gerontology in Higher Education.
Peterson, D. A., Wendt, P. F., & Douglass, E. B. (1994). Development of gerontology, geriatrics, and aging studies programs in institutions of higher education. Washington, D.C: The Association for Gerontology in Higher Education.
JAMES L. KNAPP, PH.D. PATRICIA STUBBLEFIELD, J.D.
Sociology Southeastern Oklahoma State University Durant, Oklahoma 74701
September 22, 1998