
Assessing Students' Course-Related Attitudes Using Keller's Model of Academic Motivation.

Abstract

Most end-of-course evaluation instruments elicit students' attitudes about instructors and the role that they play in the teaching/learning process. As thinking about university teaching becomes more student-centered, assessment must focus more on student learning outcomes and students' attitudes about what they are learning and their role in the teaching and learning process. This paper uses Keller's ARCS model of academic motivation as a theoretical base for exploring the assessment of students' end-of-course attitudes. The development of the Academic Motivation Profile (AMP) along with research on the viability and utility of the instrument are described. Readers are then introduced to several adaptations of the AMP for different subject matter and then guided through a process for developing their own theory-based instrument for assessing their students' academic motivation.

Introduction

The focus on course outcomes, use of data for continuous course and program improvement, and accountability in higher education have become major emphases in recent standards from both regional accreditation agencies and professional societies. The two course outcomes that have traditionally been used for systematic assessment of course quality are students' achievement levels and their attitudes at the conclusion of a course. Most measures of students' attitudes in higher education are about the instructor and the course, rather than the students themselves; and the primary use for such measures has been for annual review of faculty for promotion, tenure, and merit pay. This approach to measuring attitudes serves administrative purposes, but it does not assess students' motivation for learning, an ingredient of the teaching/learning process thought to be critical by most cognitive, developmental, and constructivist psychologists (Covington, 1998; Lambert & McCombs, 1998).

In addition to satisfying administrative needs, a valuable purpose for collecting information about a course is for formative evaluation (Dick, Carey & Carey, 2001). In formative evaluation, data are collected for the purpose of improving a course through revising course management, pedagogy, and content. Formative evaluation is most efficient when the data collected relate directly to the important pedagogical aspects of the course. For example, if an instructor views student motivation as an important aspect of learning, then formative data should provide information for the instructor that will confirm strong and detect weak student motivation. Further, if the instruments used for formative data collection are anchored in the theoretical foundations that underlie one's views of teaching and learning, then shortcomings in a course can be addressed systematically from theoretical knowledge of the effects that revisions should have on course outcomes.

The purpose of this paper is to present a theoretical base for measuring students' academic motivation and to describe the Academic Motivation Profile (AMP), an instrument developed from that theory for use in an undergraduate course in classroom assessment (Carey, 1991). Then research that has been conducted on the AMP will be summarized and examples will be given of adaptations of the instrument for use with other courses in different subject matter. The paper will conclude by taking readers through a template for developing a theory-based instrument for measuring students' academic motivation in courses and subject matter of their choice.

Background and Theoretical Underpinnings of Keller's ARCS Model of Academic Motivation

John Keller developed the ARCS Model of Academic Motivation that includes the four dimensions of attention, relevance, confidence, and intrinsic satisfaction (Keller, 1987a, 1987b, and 1987c; Keller & Suzuki, 1988). His model pulls together many facets of theory from 50 years of research related to enhancing the motivational value of instruction. Keller's original writings were intended for instructional designers and developers, so his work includes a focus on practical motivational interventions that an instructor can implement in developing and conducting a course.

* Attention, the first component of the ARCS model, is the degree to which different aspects of a course arouse and maintain students' interest and curiosity. The theoretical base for the attention factor includes theories of information processing related to human learning and memory; curiosity, particularly Berlyne's work from the 1960s; arousal; sensation seeking, notably Zuckerman's research from the early 1970s; and stimulus variability. Basically, students must attend to the course materials, lectures, discussions, practice activities, and so forth, or learning will not occur.

* Relevance is the perceived value of the course for fulfilling students' current and future aspirations. Theories related to a student's perception of course relevance include hierarchy of needs and self-actualization. Students who perceive course outcomes as relevant to their personal needs and professional futures will more likely attend to instruction and persevere in a course.

* Confidence is the degree of self-assurance students have that they can be successful with both the cognitive and affective course outcomes. The theoretical base for the confidence factor includes elements from locus of control, Bandura's self-efficacy theory, Weiner's attribution theory, and Eccles's expectancy of success. It is believed that both over- and under-confidence hamper learning. Students who believe that new skills are totally out of their range of capability will not persevere, and students who believe they know it all will not attend to the tasks at hand. Learners who are challenged, but believe they can succeed, learn most readily.

* Satisfaction, the fourth factor in the model, is the degree to which students believe the course is personally rewarding or satisfying. The theoretical base for this factor includes feedback, reinforcement, self-worth, and social context. Generally, students tend to sustain learning activities when they believe that, as a result of developing new capabilities, they have more personal value and more to offer others. As conceived by Keller, academic motivation is a complex, multidimensional construct, and it follows that assessing students' levels of academic motivation requires gathering information on multiple dimensions.

Development of the Academic Motivation Profile for Course Assessment

The Academic Motivation Profile (AMP) was developed using the four main variables from the ARCS model. It is used to assess students' attention to instructional aspects of the course, perceptions of the relevance of instruction and learning outcomes for their personal and professional needs, confidence in performing course learning outcomes, and personal satisfaction with the learning experience. The AMP was formatted so that the same structure would carry through each of the four ARCS variables. The formatting can be reviewed online in a copy of the AMP at the following URL: <http://luna.cas.usf.edu/~carey/aeq/amp.htm>. The following paragraphs describe the process through which the AMP instrument was developed.

The first step in the development process was to write a descriptive title for each variable and define the variable for students, disclosing the nature of what the professor is asking and helping to ensure that students understand and interpret the variable within the context of the course. For example, attention was defined for students as: "Various aspects of this course may or may not have gained your attention. For the following course aspects, rate your attention level as ... " These brief definitions set the tone and context for student respondents.

Second, the response scale was defined: the levels of the scale were named, and the response labels were linked to the named variable. Again using the attention factor to illustrate, the following levels and labels are used:

I was:

1. Not the least bit interested, and my attention always wandered;
2. Slightly interested, and my attention frequently wandered;
3. Moderately interested, and my attention occasionally wandered;
4. Very interested, and my attention rarely wandered;
5. Extremely interested, and my attention did not wander.

Notice that the labels describe a continuous progression of levels of attention from "not the least bit" to "extremely interested" rather than a bipolar "strongly disagree" to "strongly agree" response format. The linear progression scale format fits the logic of the question much better than the bipolar format.

The third step in the development process was to create the items by naming course aspects that are relevant for each variable. This is the point where the AMP is tailored for particular courses and course elements. Using the attention factor again as an illustration, to what elements in a course should students attend? For example, in the original AMP there are a total of nine attention items in three categories: textbook, class presentations, and participation in class. Questions within the textbook section were tailored to the format of the textbook used in the course, and within the textbook section, students were asked to rate their level of interest in the explanations and information; the examples, charts, graphs, and illustrations; and the practice exercises with feedback. Items within the class presentations section were linked to instructional presentations and demonstrations, and items within the class participation section were related to the participation opportunities within the course. Even though the academic motivation theory and the four variables in the ARCS model remain in the instrument, the AMP is not a "one-size-fits-all-courses" assessment instrument. To be useful for formative evaluation, each of the four academic motivation variables must be tailored to the unique elements of a particular course.

There are three levels of scoring and interpreting the AMP for formative evaluation of a course. The most basic is at the single item level. At the item level, instructors can answer questions such as, "How interesting were my in-class presentations and demonstrations for my students?" At the next higher level, faculty can aggregate clusters of topic-specific item data to answer questions such as, "How interested were students in the required readings, in my classroom presentations, or in our class participation activities?" The third level of data aggregation is used to obtain student perceptions of the course at the dimension level, and instructors can answer questions related to each variable such as, "Does the instruction in this course hold the attention of students?" and "How relevant do students perceive the course to be for them?"
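To illustrate these three levels of aggregation concretely, the following sketch computes item, category, and dimension means from a small response matrix. It is a minimal sketch only: the item names and the item-to-category grouping are hypothetical stand-ins for actual AMP items, and pandas is assumed as the analysis tool.

```python
# Minimal sketch: three levels of AMP score aggregation.
# Item names and category groupings are hypothetical, not actual AMP items.
import pandas as pd

# Each row is one student; each column is one 5-point attention item.
responses = pd.DataFrame({
    "text_explanations":   [4, 5, 3],
    "text_examples":       [3, 4, 4],
    "class_presentations": [5, 4, 4],
    "class_participation": [4, 3, 5],
})

# Hypothetical grouping of items into categories within the attention dimension.
categories = {
    "textbook": ["text_explanations", "text_examples"],
    "class":    ["class_presentations", "class_participation"],
}

item_means = responses.mean()                            # level 1: single items
category_means = {name: responses[cols].values.mean()
                  for name, cols in categories.items()}  # level 2: item clusters
dimension_mean = responses.values.mean()                 # level 3: whole dimension

print(item_means, category_means, dimension_mean, sep="\n")
```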

Research with the AMP

The Academic Motivation Profile (AMP) has been examined in a series of studies since 1990. These studies consistently demonstrate that the instrument yields psychometrically sound, useful course evaluation data across students, courses, programs, and instructional delivery formats. A sample of these studies, addressing the basic psychometric properties (reliability and validity), generalizability, and utility of the instrument, is described in the following paragraphs.

Reliability

The AMP consistently yields strong internal consistency reliability estimates for the overall scale (Cronbach's alpha of .94 or higher). The four factors also consistently yield high internal consistency (Cronbach's alpha ranging from .83 to .94) (Carey, 1991; Pearson, 1992; Carey, et al., 1994; Dedrick, et al., 1997).
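Readers who adapt the AMP may wish to run the same internal-consistency check on their own data. The following is a minimal sketch that computes Cronbach's alpha from its standard definition; the response matrix is invented dummy data, not data from the studies cited above.

```python
# Minimal sketch: Cronbach's alpha for a set of scale items.
import numpy as np

def cronbach_alpha(items):
    """items: a (students x items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented responses: four students by four 5-point items.
responses = np.array([[4, 5, 4, 3],
                      [2, 3, 2, 2],
                      [5, 5, 4, 5],
                      [3, 3, 3, 4]])
print(round(cronbach_alpha(responses), 2))
```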

Validity

As mentioned previously, the AMP was designed using Keller's ARCS model with four theoretically based factors of attention, relevance, confidence, and satisfaction (Keller, 1987a). These theory-predicted factors were consistently observed using confirmatory factor analysis of data gathered with the instrument (Pearson, 1992; Carey, et al., 1994; Dedrick, et al., 1997). To examine whether the AMP could be modified for other courses with different instructional aspects and course outcomes, the four AMP factors were modified to reflect the instructional delivery procedures, content, and intended outcomes of three additional courses: educational psychology, social foundations of education, and curriculum (n = 765). Using confirmatory factor analysis of student ratings, the four-factor model generalized across the three different disciplines (Pearson, 1992).

Criterion validity was investigated by correlating AMP results with end of course achievement (Pearson, 1992). The correlation of AMP scores with achievement (Pearson product-moment r = .30) is significant, and it is typical of attitude and achievement comparisons. This low but significant correlation demonstrates that the AMP measures an affective characteristic distinct from end of course achievement. Convergent validity was examined by comparing students' AMP scores with their free responses on an instrument designed to measure the same construct, and a significant positive correlation was observed (Pearson, 1992).
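A criterion-validity check of this kind is straightforward to reproduce with one's own course data. The sketch below is illustrative only: it assumes SciPy is available, and the AMP totals and achievement scores are invented values, not data from the studies above.

```python
# Minimal sketch: correlating AMP totals with end-of-course achievement.
from scipy.stats import pearsonr

amp_totals  = [112, 98, 130, 105, 121, 90, 118]  # invented summed AMP scores
achievement = [82, 75, 91, 78, 88, 70, 80]       # invented final percentages

r, p = pearsonr(amp_totals, achievement)
print(f"r = {r:.2f}, p = {p:.3f}")
```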

The meaning of students' scores, or the validity of score interpretations, was studied using the AMP by examining the relationship between pre-course expectations and end of course evaluations (Carey, Carey, & Pearson, 1992) as well as the effects of non-instructional variables (e.g., major area of study, hours employed outside class, and section) on affective outcomes (Carey, et al., 1994). These authors found no significant differences between students' initial attitudes and their end of course evaluations. Carey, et al. (1994) also found a significant positive relationship between initial expectations and end of course evaluations and further found that the relationship varied by major. This congruence between pre- and post-measures is usually explained by attributing students' course ratings to a latent trait of their overall perceptions of the school or schooling rather than to happenings within a particular course (Crittendon & Norr, 1973; Finaly & Neumann, 1985; Carey, et al., 1997). While this latent trait phenomenon is bothersome for administrators who use end of course assessments for personnel evaluation purposes, it is not a problem for learning theorists, instructional designers, and faculty members whose evaluation purpose is course refinement. From the perspective of learner-centered instruction (Lambert & McCombs, 1998) and of accreditation standards related to continuous evaluation and refinement of instruction, discovering particular course aspects that students neither predict to be interesting or relevant nor find to be so after the fact points to specific areas of instruction in need of revision.

Instrument and Procedures Modification Studies

The usefulness of the AMP depends upon a faculty member's ability to modify the instrument to fit the special circumstances within a course. To further examine the versatility of the AMP, studies were conducted to investigate different formatting and administration procedures. One formatting study compared the use of a masked personality type format with a transparent achievement format. A masked personality type format, frequently used with measures of attitude or personality, hides the true purpose of the assessment and scatters items from each dimension of the construct throughout the instrument. The transparent achievement format presents items grouped by clearly defined and labeled dimensions. Respondents (n=376) were randomly assigned to one of the two conditions. Confirmatory factor analysis was used to examine the fit of the model for each format. Although both formats fit the measurement model reasonably well, data from the more straightforward achievement format fit the theoretical academic motivation model somewhat better than the masked format (Carey, et al., 1994). For faculty tailoring the AMP to their own courses, this suggests that the better format is to cluster the items related to each dimension together and to introduce each dimension to students with a title and dimension definition.

Some faculty may wish to correlate students' academic motivation scores with other performance indicators such as class achievement or mandatory course evaluation instruments. To examine the viability of this type of comparison, an experiment was conducted that compared AMP results obtained from students responding anonymously with results from students who identified themselves on their response forms. No significant differences were observed between the two response conditions (Carey, Carey & Pearson, 1992). This suggests that students believe they are rating the course rather than the instructor, which is indeed what the instrument asks of them. Having students identify themselves enables faculty members to link students' academic motivation with other variables of interest such as course achievement, major area of study, and number of hours employed.

Faculty differ in the way they construct item response scales, so other studies examined the format of the response scale used with the items. One study compared a reversed-order scale (1 is high and 5 is low) with the existing AMP scale (1 is low and 5 is high) and found no significant differences between students' scores using the two formats (Carey, et al., 1997). Although this research demonstrates that the low-to-high ordering of responses is not necessary, it does facilitate direct correlational studies with achievement scores or other comparison scores without reverse coding the attitude scale. Another study compared a four-point with a five-point scale and found no significant differences between students' responses using the two formats (Carey, et al., 1997). These findings suggest that authors have options for response formats in their own instrument designs.
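Where a reversed-order scale is used anyway, re-aligning it for correlational work is a one-line transformation. A minimal sketch, assuming a 5-point scale on which 1 is high and 5 is low:

```python
# Minimal sketch: reverse coding a 5-point scale so higher = more favorable.
scores = [1, 4, 2, 5, 3]            # responses where 1 = high, 5 = low
recoded = [6 - s for s in scores]   # now 1 = low, 5 = high
print(recoded)                      # [5, 2, 4, 1, 3]
```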

Changes in students' academic motivation were examined across various points in time during the semester (Ferron, Dedrick & Carey, 1994; Dedrick, et al., 1995). The results indicate that, while students' initial expectations were consistent with their end of course ratings, their attitudes did change at examination points throughout the course, providing specific content-related information for formative evaluation. Although students' perceptions of their attention toward instruction and instructional relevance dropped slightly as the semester progressed, their confidence in performing the skills studied and their satisfaction with their skill development increased throughout the course. Further work in this area may provide more explanation with regard to students' attitude change as they move through a semester.

Example Adaptations for the AMP

The Academic Motivation Profile has been adapted for multiple other courses, and we describe and illustrate two adaptations in this article. The first adaptation was for the same assessment course delivered at a distance over the web. Distance faculty are often unable to observe students' attitudes firsthand through their classroom behavior, so they may need to modify the instrument to collect more information in order to monitor students' academic motivation adequately. The modified instrument may be viewed at the following URL: <http://luna.cas.usf.edu/~carey/aeq/ampdistance.htm>. In creating the distance version of the AMP, the professor retained the four ARCS variables and the variable definitions as advance organizers. In addition, course instructional elements were adapted for web-based delivery, both attention and relevance were linked to specific instructional activities, and the outcome skills statements in the confidence variable were made less global. The intrinsic satisfaction variable remained unchanged. Even though faculty are always concerned about the proliferation of items on a course evaluation instrument, these item expansions and the other modifications were considered necessary for the distance learning format.

The second adaptation was for a graduate level distance course in management in the School of Library and Information Science. This instrument can be viewed at the following URL: <http://luna.cas.usf.edu/~carey/aeq/amplis.htm>. The professor changed the three main areas within the attention variable to textbook and assigned readings, lectures and discussions, and project assignments; totally changed the relevance items to a library science context; changed the learning outcomes within the confidence factor to reflect the management course outcomes; and changed the intrinsic satisfaction variable to a library science context. These adaptations cast the AMP into course and professional contexts with which the graduate library science students could identify.

Research with the AMP has demonstrated that adaptations of the variables to fit the structure and nature of various courses at the undergraduate and graduate level did not change appreciably the psychometric characteristics of the data gathered using the instrument. The changes, however, did ensure that the data gathered were relevant for the formative evaluation of individual courses.

Tailoring the AMP to Your Course

For readers with limited experience in designing and developing theoretically based attitude assessment instruments, we have included a template to guide your initial attempts. The template can be accessed at the following URL: <http://luna.cas.usf.edu/~carey/aeq/amptemplate.htm>. The first column of the template contains the ARCS factors with their definitions; the second column provides space for you to convert the definition for your course context and students. The third column contains as prompts the categories and items within each factor from the original AMP. The fourth column provides space to identify category and item counterparts from your courses.

Within the attention factor, you are guided to identify main features of your instruction that should attract and maintain students' attention. For each main feature, you are prompted to identify its key facets. For the relevance factor, you are prompted to identify the personal and professional aspirations of your students and then your course outcomes intended to support those aspirations. Related to the confidence factor, you are prompted to identify the three or four major course outcomes, including cognitive, motor, and affective, and then name a couple of key elements within each of these outcomes. Finally, within the satisfaction factor, you can use the template to identify ways your course should be personally rewarding to your students. In this arena, take care not to name extrinsically satisfying rewards such as grades, certificates, or degrees, and focus instead on intrinsic satisfaction, e.g., personal effort, new potential, enhanced self-worth, and so forth.
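As you work through the template, it can help to hold the emerging instrument in a simple structured form so that definitions and items stay grouped by dimension, as the format studies above recommend. The following is a minimal sketch of one such representation; every definition and item wording here is a hypothetical placeholder to be replaced with your own course elements.

```python
# Minimal sketch: a tailored AMP held as a simple data structure.
# All definitions and items below are hypothetical placeholders.
amp_instrument = {
    "attention": {
        "definition": "Rate your level of interest in each course aspect.",
        "items": ["Assigned readings", "Lecture demonstrations",
                  "Small-group activities"],
    },
    "relevance": {
        "definition": "Rate how relevant each outcome is to your goals.",
        "items": ["Designing classroom tests", "Interpreting test scores"],
    },
    "confidence": {
        "definition": "Rate your confidence in performing each skill.",
        "items": ["Writing objective items", "Building a grading plan"],
    },
    "satisfaction": {
        "definition": "Rate how personally rewarding each aspect was.",
        "items": ["Effort invested", "New professional capabilities"],
    },
}

# Print the instrument in the transparent, dimension-by-dimension format.
for dimension, spec in amp_instrument.items():
    print(dimension.upper(), "-", spec["definition"])
    for i, item in enumerate(spec["items"], start=1):
        print(f"  {i}. {item}")
```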

Creating and using an Academic Motivation Profile tailored to key features of your courses will provide you with formative evaluation information about your students' motivations for learning that is not currently available from typical student rating forms for a course or instructor. Information from your own AMP will make a positive addition to assessment information you gather related to student attitudes in your courses. It will also help to ensure that your assessments are learner-centered.

Conclusions

Systematic monitoring of instructional and learning effectiveness for course refinement will require different types of course evaluation instruments. Monitoring students' achievement and academic motivation related to specific course aspects and outcomes provides a promising, systematic, data-driven system for tracking course effectiveness, supporting instructional refinements, and documenting the process for course and program reviews. Instructional designers have used academic motivation theories for forty years to enhance instructional effectiveness (Dick, Carey & Carey, 2001), and there is no reason why these theories cannot be used by university faculty to monitor course impact and identify areas for improvement.

References

Carey, J. O. (1998). Students' perceptions of motivation, affiliation, interaction, and practice/feedback in traditional and Internet-based course delivery. Paper presented as part of the Teaching Methods SIG symposium Transitions in Teaching Methods: Research Reports and Observations at the Annual Meeting of the Association for Library and Information Science Education, New Orleans, LA.

Carey, J. O., Carey, L. M., Gregory, V. L., & Wallace T. L. (1999). A report of studies of distance learners' academic performance, attitudes about technical and pedagogical aspects of web delivery, and use of information sources and services. Paper presented at the 1999 American Association for Higher Education Assessment Forum, Denver, CO.

Carey, L. M. (1990). Development and validation of the Academic Motivation Profile. Paper presented at the annual meeting of the Florida Educational Research Association, Tallahassee, FL.

Carey, L. M., Carey, J. O., Dedrick, R. F., Wallace, T. L., & Kushner, S. N. (1994, November). Students' evaluations of courses: What do they mean? Paper presented at the annual meeting of the Florida Educational Research Association, Tampa, FL.

Carey, L. M., Carey, J. O., & Pearson, L. C. (1992, April). A comparison of students' initial expectations for a course and their end of course evaluations. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.

Carey, L. M., Dedrick, R. F., Carey, J. O., & Kushner, S. N. (1994). Procedures for designing course evaluation instruments: Masked personality format versus transparent achievement format. Educational and Psychological Measurement, 54, 134-145.

Carey, L. M., Wallace, T. L., Thompson, K. A., Vizcain, D., Dedrick, R. F., & Ferron, J. M. (1997, November). Multiple Studies Examining the Board of Regents Course Evaluation Instrument. Symposium conducted at the Annual Meeting of the Florida Educational Research Association, Orlando, FL.

Covington, M. V. (1998). The will to learn: A guide for motivating young people. Cambridge, United Kingdom: Cambridge University Press.

Crittendon, K. S. & Norr, J. L. (1973). Students' values and teacher evaluation: A problem in person perception. Sociometry, 36 (2), 143-151.

Dedrick, R. F., Carey, L. M., Carey, J. O., Wallace, T. L., Greenbaum, P. E., Ferron, J. M., & Kushner, S. N. (1995). Changes in students' attitudes about the relevance of an undergraduate course in measurement: A growth curve analysis. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.

Dick, W., Carey, L. M., & Carey, J. O. (2001). The systematic design of instruction (5th ed.). New York: Addison Wesley Longman.

Ferron, J. M., Dedrick, R. F., & Carey, L. M. (1994). Modeling an attitudinal sequence for students enrolled in an undergraduate course. Paper presented at the annual meeting of the Florida Educational Research Association, Tallahassee, FL.

Finaly, E. & Neumann, Y. (1985). The measurement and meaning of students' satisfaction with instruction. Journal of Instructional Psychology, 12 (1), 11-18.

Keller, J. M. (1987a). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10 (3), 2-10.

Keller, J. M. (1987b). Strategies for stimulating the motivation to learn. Performance and Instruction, 26 (8), 1-7.

Keller, J. M. (1987c). The systematic process of motivational design. Performance and Instruction, 26 (9), 1-8.

Keller, J. M. & Suzuki, K. (1988). Use of the ARCS motivation model in courseware design. In D. H. Jonassen (Ed.), Instructional designs for microcomputer courseware (pp. 401-434). Hillsdale, NJ: Erlbaum.

Kushner, S. N., Carey, L. M., Dedrick, R. F., & Wallace, T. L. (1995, April). Preservice teachers' beliefs about the relevance of teacher education coursework and their confidence in performing related skills. Paper presented at the Annual Conference of the American Educational Research Association, San Francisco, CA.

Lambert, N. L. & McCombs, B. L. (Eds.). (1998). How students learn: Reforming schools through learner-centered education (pp. 351-473). Washington, DC: American Psychological Association.

Pearson, L. C. (1992). The construct validation of a course evaluation instrument based on Keller's ARCS Model of Academic Motivation (Doctoral dissertation, University of South Florida, 1992). Dissertation Abstracts International, 53-03A, 0784.
Lou M. Carey, University of South Florida
Tary L. Wallace, University of South Florida
James O. Carey, University of South Florida


Dr. Carey, Professor, Department of Educational Measurement and Research, teaches courses in evaluation and classroom measurement <careyl@typhoon.coedu.usf.edu>. Wallace, Instructor, Department of Educational Measurement and Research, teaches courses in classroom measurement <twallace@tempest.coedu.usf.edu>. Dr. Carey, Assistant Professor, School of Library and Information Science, teaches courses in instructional systems and technology <carey@chuma1.cas.usf.edu>.
