Challenges to effective assessment of learning.
Assessment of student learning has received much attention in the last decade. Many experts have articulated the value of assessment, and many critical decisions in higher education are now based at least partially on effective assessment. While the value of assessment is clear, the implementation and maintenance of an effective assessment process is often challenging. This paper identifies some of these challenges and discusses potential solutions.
The importance of systematically and continuously assessing student learning in higher education has been widely discussed. Seybert (2002) notes that assessment of student learning has become a major issue for higher education for multiple reasons, including accreditation, accountability, and performance indicators for funding. Angelo and Cross (1993) define assessment in this context as the multidimensional process of appraising the learning that occurs in the classroom. Pascarella and Terenzini (1991) note that the enhancement of student learning is central to the mission of colleges and universities. Clearly, systematic, meaningful assessment of student learning helps an institution determine if its core mission--the education of students--is being achieved.
Although the jargon of assessment varies somewhat from one expert to the next, the underlying concept is simple. Assessment of student learning means answering several important questions. What are our students expected to learn? What information can we collect to determine if that learning is occurring? What decisions can we make about our program after we review that information? Banta, Lund, Black, and Oblander (1996) note that assessment is not an end in itself but a vehicle for continuous improvement of the educational process.
Norton and Dudycha (2001) note that systematic assessment of student learning can be valuable to programs and to institutions in multiple ways. It can provide information that is essential for continuous improvement of academic programs. It is also essential for an institution's accountability to various stakeholders, including prospective students and parents, accrediting agencies, and administrators who make decisions about resource allocation. Yet despite the potential value of assessment, implementing and maintaining a viable assessment process is often fraught with challenges.
The purpose of this paper is to identify some of those challenges and discuss ways to surmount them. These challenges reflect concerns commonly expressed by both faculty and administrators during the author's experience as Director of Academic Assessment at a regional public university in the Midwest, United States. The author, also a Professor of Human Resource Management, has worked extensively with administrators and with faculty from across the university to discuss, design, implement, and monitor assessment efforts.
"How do we know what our goals should be?" Determining the learning goals is at the heart of systematic assessment of student learning. Decisions about how to assess learning--e.g., what measurement approaches should be used--are secondary to decisions about what the learning goals should be. Oftentimes, however, faculty are uncertain about what their program's learning goals should be and about how learning outcomes can be identified. In some cases, a learning outcome was chosen largely because that outcome could be measured. This is obviously contrary to the essence of assessment. Ideally, the focus should be on what students ought to be learning and only then on how to assess it. This question was asked frequently on the author's campus, and it was clear that some faculty were unsure about how to determine the goals of their programs. In reality, faculty are often the best judges of what should be addressed in their programs. Therefore, it is helpful for faculty to be familiar with some of the ways to identify learning goals.
In some cases, the program may be accredited by an external agency, and often such agencies have specific requirements about program/curriculum contents. For example, many business programs seek to earn and/or maintain accreditation from AACSB International (the Association to Advance Collegiate Schools of Business). Newly adopted AACSB standards include both specific learning outcomes and more general learning outcomes. While relying solely on an accrediting agency for identifying learning standards may be too formulaic (cf. Dill, 2000), it may at least provide some guidance in terms of what a program should address. [Note that virtually all accrediting agencies, whether program-specific or institution-wide, require systematic assessment of student learning. Indeed, Hatfield and Gorman (2000) note that the requirements of accrediting agencies are one of the reasons that assessment has gained additional urgency.]
Another way of identifying learning goals is to review programs at other universities. Faculty in a physics program, for example, may compare their program (course offerings, prerequisites, policies, etc.) to physics programs at other universities. The faculty may then decide to use similar goals or they may decide to use at least some unique goals. Both approaches have value. In the former, the similarity ensures that stakeholders will recognize that the program is consistent with offerings at other universities. This may be important in fields of study such as Accounting, where a student may eventually wish to pursue professional certification, which depends partially on completion of a standardized course of study. In the latter, the uniqueness may help create a niche for a program--e.g., "here is what is special about this program of study at our university as compared to other universities."
Still another way to identify learning goals is to determine what external stakeholders (e.g., employers that recruit students) consider valuable. The rationale is that if prospective employers need employees with specific skills, then the program can be designed to incorporate those skills. Faculty from a program might discuss desirable learning goals with prospective employers via surveys or focus groups. Norton and McArthur (1995), for example, described a process where faculty from a business program collected performance appraisal forms from approximately two dozen area employers. Those forms were carefully reviewed by a faculty committee to identify skills that were consistently mentioned by employers. The resulting list of eleven learning outcomes became an important part of the program's curriculum.
While this is not an exhaustive list, it may help address the original question. There is no "one size fits all" set of program goals, and there is no one best way of identifying appropriate learning goals. Indeed, a meaningful discussion of how to identify appropriate goals/outcomes for a program can be an immensely valuable part of the assessment process.
"Aren't grades meaningless for 'real' assessment?" The essence of this question is what grades in a course really mean. In the author's experience, faculty in many disciplines routinely express concern that grades are not a valid measure of student learning. Whether this concern is legitimate depends largely on the instructor's--and the program's--philosophy of grading.
Too often, grades are merely normative--e.g., a student receives a passing mark or a grade of "excellent" simply because that student has outperformed a certain percentage of the other students in the course. While normative grading may be acceptable for some purposes--e.g., for the instructor to provide a final mark for each student--it is much less helpful for other purposes. For a prospective employer, for example, knowing that a student received a mark of "outstanding" in a course because that student was in the top ten percent of a class says little or nothing about what that student actually knows or can do. Indeed, the tendency to grade normatively may be a reason why grades are often viewed with suspicion in assessment. In many of the author's discussions about assessment, it was obvious that many faculty were convinced that "real" assessment had to be burdensome and time-consuming and had to involve activities entirely separate from the grading process. This belief creates significant additional work for faculty and implies that course grades themselves provide no relevant information about student learning. In some cases, in fact, faculty added work that did not count toward the course requirements, such as an extra writing assignment to assess students' writing skills.
Before concluding that grades have little or no value in assessment, however, another perspective may be helpful. The author has often reassured faculty that they have always assessed student learning; an instructor has to do so in order to give students grades. Also, the work a student actually does as part of the requirements for a course is probably the most realistic information about student learning in that course. A student may have little if any motivation to do his/her best on an exercise that will have no bearing on grades or degree completion. Rather than ignoring grading as a measure of student learning, the emphasis could be on ensuring that the grades themselves are meaningful. On many campuses, letter grades are used. Generally, a grade of A connotes outstanding performance and a grade of F connotes failure. Faculty are typically required to provide a final letter grade for each student in a course. Rather than assigning such grades normatively (e.g., a grade of A means that a student was better than everyone else in the class), it would be helpful to articulate the actual meaning of a specific grade. For example, if the instructor has identified ten specific goals for a writing course, he/she may then decide that only students who demonstrate proficiency on all ten goals will earn the grade of A. Students who demonstrate proficiency on eight or nine goals might earn the grade of B, and so on.
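The criterion-referenced grading rule described above--a letter grade determined by how many course goals a student has demonstrated proficiency on--can be sketched as a simple mapping. This is a minimal illustration only; the function name and the thresholds below the A/B cutoffs stated in the text are hypothetical, not a prescribed scale.

```python
def letter_grade(goals_met: int, total_goals: int = 10) -> str:
    """Map the number of course goals a student has demonstrated
    proficiency on to a letter grade (criterion-referenced, not
    normative). Thresholds below B are illustrative assumptions."""
    if not 0 <= goals_met <= total_goals:
        raise ValueError("goals_met must be between 0 and total_goals")
    if goals_met == total_goals:       # proficiency on all goals -> A
        return "A"
    if goals_met >= total_goals - 2:   # e.g., eight or nine of ten -> B
        return "B"
    if goals_met >= total_goals - 4:   # hypothetical C cutoff
        return "C"
    if goals_met >= total_goals - 6:   # hypothetical D cutoff
        return "D"
    return "F"

print(letter_grade(10))  # A
print(letter_grade(9))   # B
```

The point of such a mapping is that the grade now carries specific meaning: an A certifies proficiency on every stated goal, rather than a rank relative to classmates.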
This approach to grading could serve several purposes. First, it "piggybacks" on the fact that faculty are grading student performance anyway, thus reducing the need for a great deal of extra effort on the part of the instructor. Also, as noted above, it involves work that the student is actually doing for that course rather than artificial exercises outside the context of course requirements, which may not be a very realistic picture of student learning. Finally, it would provide much more meaningful information. A prospective employer, for example, would know that a student who earned an A in a writing course had mastered specific writing skills.
"Aren't we required to use standardized testing for 'real' assessment?" This question is somewhat related to concerns about grading. Many faculty believed that the only type of assessment that counted was the administration of a standardized test of some sort. Certainly, there are standardized tests available for a variety of purposes. Whether or not a standardized test is an appropriate means of gathering information for assessment depends primarily on the fit between the content of the test and the learning it is meant to assess. In some cases, the learning goals (which, as noted above, should be the first priority in assessment) can be meaningfully assessed by a standardized test. In other words, there may be a test that matches the learning goals, in that it adequately assesses those goals and does not include additional, unrelated goals. In that case, use of a standardized test could, under the right circumstances, make sense.
Another important consideration in standardized testing is the logistics of test administration. Faculty should be cognizant of how the testing might be perceived by students. If it is perceived as an extraneous exercise unrelated to their coursework or degree completion, their motivation to perform well might be affected. Test results, then, may be unrepresentative of actual classroom performance and learning. On the other hand, if a standardized test can be administered in a way that makes the results important to the students--e.g., the test will become part of a grade in a course--it may provide valuable information about student learning.
"Don't we have to make changes in our curriculum in order to prove that we really did assess student learning?" As noted above, assessment is a process, whereby we ask three questions. What are students meant to be learning in this program? What information can we collect to determine if that learning is occurring? What decisions do we make about our program as we review that information? Obviously, continuous improvement of a program is an important reason to assess student learning. And change may be called for after faculty review information collected about learning goals. Faculty may decide to add courses, delete courses, change the way a course is taught, change the sequencing of courses, change course prerequisites, etc., after reviewing and discussing information collected in assessment.
There may be times when faculty review the information collected about learning goals and conclude that the program is accomplishing what it is meant to accomplish. In that case, arbitrarily changing something is contrary to the real purpose of assessment. The idea is not change merely for the sake of change, but ongoing engagement in the assessment process. This proved to be a big relief to faculty from several different programs on the author's campus. In each case, the program faculty had systematically identified appropriate learning goals and had collected information about whether those learning goals were being achieved. Though they concluded that the goals had, indeed, been achieved, they felt that they had somehow failed in the assessment process because they had decided not to change anything in their respective programs. Not only had they not failed; their efforts represented the essence of assessment. As Walvoord (2004) points out, the ultimate value of assessment comes from "closing the loop."
"We'd like to do more for assessment, but we don't have the resources." This is a common, ongoing concern and, in view of the budgetary crises affecting many universities in the United States, is often a reflection of a lack of administrative resources. In many programs, faculty express the wish to use multiple sources of information in their assessment efforts, including surveys of alumni, focus groups with alumni and other important stakeholders such as area employers, standardized testing where appropriate, etc. Using multiple sources of information in assessment is widely recommended by assessment experts (cf. Angelo & Cross, 1993). Often, however, faculty are concerned that they lack the time, expertise, and/or administrative support for such activities.
Ideally, this concern can be addressed by making assessment a high priority in terms of university resources. Availability of funds for activities such as alumni surveys or focus groups with outside stakeholders will allow collection of information from multiple sources. It is also important to ensure that expert support for assessment is available internally. For example, it may be helpful to identify key individuals on campus who will receive specialized training in assessment and who can then take that training and expertise back to their departments and colleges to assist faculty from various programs in their own assessment efforts. This internal expertise can be especially important for assessment activities such as surveys or focus groups. In the author's experience, faculty often feel that they themselves are not adequately trained for such activities. Therefore, the ready availability of experts who do have appropriate training would allow faculty to focus on their own role in assessment.
Another potential solution to the challenge of scant resources is to leverage available technology where possible. For example, one program at the author's university designed a web-based system for managing assessment data. In this case, the D2L--Desire-to-Learn--package was used. This package is widely available and in fact is already in use at many campuses as a teaching tool.
"Why should we do this? What's in it for us?" This is a common and very understandable question, especially if faculty view assessment of student learning as simply another chore in addition to the existing demands of teaching, research, and service. There are multiple ways to address this concern. The simplest is to communicate regularly with faculty so they are encouraged to engage in assessment and rewarded for doing so. On the author's campus, we developed a very simple way to do this. Each academic year, the Director of Assessment and the appropriate dean select approximately a dozen different programs for assessment visits. At these visits, the program faculty discuss their assessment efforts, including the learning outcomes and how they were identified, information collected about those outcomes, and any decisions that have been made about the program as a result. These meetings allow the university to closely monitor assessment efforts. They also provide an opportunity for faculty to raise questions and concerns about assessment.
Another response to this concern is to continually educate faculty about the benefits of assessment for ongoing improvement of their program. It should not be viewed as something done only to satisfy external stakeholders such as accrediting agencies. Likewise, it need not be viewed as simply another burden dreamed up by administrators. Regardless of the presence of accrediting agencies or the demands made by administrators, systematically assessing learning is valuable in its own right. Here again, regular communication with program faculty is essential. On the author's campus, regular assessment forums are scheduled, where faculty from all disciplines are invited to participate to discuss issues related to assessment. Also, the university maintains an assessment "resource room," which houses literature on assessment, samples of assessment plans, Internet resources, etc.
Finally, it is important for administrators to formally acknowledge the efforts made by faculty in assessment. Certainly, the process works best when multiple stakeholders in an institution are actively involved. Ultimately, however, much of what is valuable about the assessment process comes directly from faculty. They typically have a great deal of input into identifying appropriate learning goals, and as noted above, determining what a program is meant to accomplish is at the heart of the assessment process.
In addition, faculty on many campuses are largely responsible for collecting the information that will help answer the question of whether the learning goals are being met. On the author's campus, assessment efforts have become part of the university's administrative and reward structure. At the university level, information about assessment of learning is considered in resource allocation decisions. For example, if a program requests additional faculty, a university-level committee considers that program's assessment efforts as part of making a decision. Also, an individual faculty member's contribution to assessment efforts for his/her respective program is formally reviewed during the biannual merit review process to ensure that faculty are evaluated partially on their assessment efforts.
In sum, careful, systematic assessment of student learning is essential in determining whether a university's core mission is being accomplished. Braathen and Robles (2000) contend that, rather than view assessment as something done to instructors or to students, it should be viewed as a vital component of effective instruction. Although such assessment can create numerous challenges, these challenges can often be addressed by careful communication, adequate resources, and overall administrative support. Given the potential value of a viable assessment program, it is worth understanding and confronting these challenges.
Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco: Jossey-Bass.
Banta, T. W., Lund, J. P., Black, K. E., & Oblander, F. W. (1996). Assessment in Practice: Putting Principles to Work on College Campuses. San Francisco: Jossey-Bass.
Braathen, S., & Robles, M. (2000). The importance of assessment in business education. In J. Rucker (Ed.), Assessment in Business Education, National Business Education Yearbook, #38. Reston, Virginia: National Business Education Association.
Dill, D. D. (2000). Is there an academic audit in your future? Change, July/August, 35-41.
Hatfield, S. R., & Gorman, K. L. (2000). Assessment in education--the past, present, and future. In J. Rucker (Ed.), Assessment in Business Education, National Business Education Yearbook, #38. Reston, Virginia: National Business Education Association.
Norton, S. M., & Dudycha, A. L. (2001). Accountability and integration in assessment: Identifying learning goals. Academic Exchange Quarterly, Spring, 38-42.
Norton, S. M., & McArthur, A. (1995). Identifying and integrating managerial skills in a business curriculum. Midwest Academy of Management Annual Conference, St. Louis, April.
Pascarella, E. T., & Terenzini, P. T. (1991). How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass.
Seybert, J. A. (2002). Assessing Student Learning Outcomes. New Directions for Community Colleges, #117, Spring, 1-12.
Walvoord, B. E. (2004). Assessment Clear and Simple. San Francisco: Jossey-Bass.
Sue Margaret Norton, University of Wisconsin--Parkside
Sue Margaret Norton, Ph.D. is a Professor of Human Resource Management and the Director of Academic Assessment at the University of Wisconsin--Parkside
Academic Exchange Quarterly, September 22, 2006.