
The Learning Activities Questionnaire: A Tool to Enhance Teaching

IMPROVING THE QUALITY of teaching represents one of the central foci in social work education. Yet there are few tools available to evaluate and improve teaching. The Student Evaluation of Teaching (SET), commonly administered at the end of each course at universities, represents the most prevalent technique of teacher evaluation in today's social work schools (Steiner, Gerdes, Holley, & Campbell, 2006) as well as the most researched instrument in higher education (McKeachie & Kaplan, 1996). The SET has been found to be a reliable and relatively valid instrument (Marsh & Roche, 1997); however, findings regarding its tendency toward bias are equivocal (Steiner et al., 2006). For example, there is mild, sometimes conflicting evidence suggesting that SET ratings are affected by grade leniency (Marsh & Roche, 1997), race (Smith, 2007), instructor rank (Blackhart, Peruche, DeWall, & Joiner, 2006), class size, and student academic competency (d'Apollonia & Abrami, 1997). Still, students are generally viewed as valid sources of course evaluation (Falchikov, 1995; McKeachie & Kaplan, 1996), and the SET is commonly accepted as an adequate measure for evaluating teaching in higher education. Although the SET is embraced as an effective way of evaluating teaching, it may profit from supplemental tools both to assess teaching more comprehensively and to facilitate instructor improvement.

Comprehensiveness of the SET

The SET typically focuses on instructor behaviors such as classroom delivery (e.g., challenging, stimulating, enthusiastic), teacher interaction with students (e.g., approachable, respectful, helpful feedback), and global qualities (e.g., would you recommend this course/instructor?). These variables are central to teaching and course quality, yet other variables are similarly important. For example, Steiner et al. (2006) point out the importance of supplemental teaching tools such as online resources (e.g., discussion boards) or visual aids (e.g., PowerPoint); however, they note the dearth of literature and evaluation instruments in these areas. Social work students typically spend a significant proportion of their course time on learning activities outside the classroom, such as readings and assignments; yet the SET does not assess these activities. Increasing the comprehensiveness of course evaluation can expand potential areas of change and thus enhance opportunities for improved learning.

Capacity to Improve Teaching

There is some evidence, both testimonial and empirical, suggesting that the SET can significantly improve teaching (Marsh & Roche, 1997; Stevens & Aleamoni, 1985), particularly for ineffective teachers (Cross, 1999). However, such support is generally contingent on the SET being coupled with appropriate consultation or some form of external assessment, training, and monitoring (Cross, 1999; Marsh & Roche, 1997). The use of SET scores in tenure and promotion decisions is still common; yet some experts warn that its employment without additional evaluation methods may even hurt teaching (e.g., Armstrong, 1998). In short, the SET has been commonly accepted as an instrument to evaluate teaching, but its capacity to improve teaching is questionable unless it is coupled with a supplemental tool.

Related to this, a possible limitation of the SET is that it tends to focus on general teacher attributes, which may impede clear direction for change. There is sound support for the importance of specificity in teaching evaluation (Wilkerson & Lang, 2004) and improvement (Boyd, 1989). This is consistent with behavioral theory, which posits that the capacity to change is enhanced when the problem and goal behavior are specified (Gambrill, 1977). For example, an instructor who receives a low SET rating on stimulating student interest (i.e., a nonspecific variable) may be unclear about whether changing class exercises, readings, or discussions will improve student interest. In fact, a low rating may reflect the professor's lack of understanding about what constitutes stimulating teaching, and he or she might even make the wrong change in an effort to improve. In contrast, a rating on a specific variable, such as a reading or exercise, as included in the LAQ, provides clearer direction for change. In summary, if an instructor evaluation leads to clear and specific direction for change, the potential for improving teaching is enhanced (Emerson & Records, 2007; Keeley, Smith, & Buskist, 2006; Wilkerson & Lang, 2004).

The Learning Activities Questionnaire (LAQ)

The LAQ as a Supplement to the SET

In contrast to the SET, which focuses on instructor behaviors, the LAQ focuses more on student course-related tasks, many of which are completed outside the classroom. For example, the SET may focus on the extent to which the instructor listens to or is interesting to the student, whereas the LAQ focuses on the quality of a reading or exam. In a sense, the SET is teaching-focused and therefore more professor-centered, whereas the LAQ is more learning-focused and student-centered. Each covers a different yet important facet of instruction. Of course, instruction is a systemic process that involves the teacher, the student, and the interaction between the two (Palmer, 1998). The SET combined with the LAQ would address the student-professor interaction more thoroughly and thus expand what is evaluated and potentially targeted for change.

Another area in which the LAQ differs from, and therefore supplements, the SET is its focus on concrete, specific variables, which facilitates clearer direction for change. For example, instead of asking whether the instructor enhanced the student's interest in the subject area, as targeted in the SET, the LAQ elicits the student's evaluation of a learning activity, such as a specific assignment. Professors who wish to improve their teaching may find it easier to change if the evaluation targets specific areas, such as changing a reading, as opposed to more general areas, such as enhancing student interest in a subject area.

Although the SET has certain strengths as an evaluation tool, it also has limitations, which can be addressed through the use of the LAQ as a supplemental tool. The LAQ could augment the SET by expanding the comprehensiveness of one's course evaluation and by providing clearer, more specific direction for instructor improvement.

Instrument Description

The LAQ typically had between 25 and 30 items, depending on the number of readings and other specific learning activities we wished to evaluate during a given semester. Using a 5-point Likert scale ranging from 1=poor to 5=excellent, along with a "don't know" option, respondents rated each item separately. Learning activities consisted of specific readings (e.g., Beckman, 1994), presentations, exams, exam reviews, class exercises, structured discussions, handouts, papers, and other assignments. Alterations were made in learning activities from year to year based primarily on student feedback provided in the questionnaires.

The LAQ was simple and quick to construct, taking about 15-20 minutes to create each year. Using a template (see Appendix), the author cut and pasted readings from the syllabus, then added the assignments, exercises, exams, exam reviews, and other learning activities.

Administration of the LAQ

At the end of the course, students in each section anonymously completed the LAQ without the professor present. Students were told that their ratings and feedback would be heavily considered when the course was revised before it was next offered.

The LAQ as a Tool to Enhance Learning and Teaching

Professors met to review LAQ ratings twice each year. The ratings reviewed consisted of a table of summary scores along with a ranking of each item based on mean scores. The bottom third of the scores were highlighted and represented the primary focus for change. The first of the two annual professor meetings occurred within 2 weeks after teaching the course. Along with scores and rankings of all sections combined, professors received their own section scores, which were used as a basis to develop suggestions for change. Most central to these meetings were recommendations concerning whether learning activities should be handled differently (e.g., providing questions the students should consider before they complete a challenging reading) or dropped altogether.

The second professor meeting occurred a few weeks before the course was taught again and focused on the preparation of the new syllabus. The suggestions for change from the previous meeting, along with summary LAQ scores of all sections combined, were distributed. Professors typically accepted the suggested changes from the previous meeting, so the second meeting generally focused on operationalizing them. Actual changes in learning activities were incorporated in the LAQ that would be administered at the end of the semester. Professors were also encouraged to add items unique to their own class (e.g., exercises, field trips, or role plays), and their LAQ was independently revised accordingly.
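For readers who tally LAQ results electronically, the scoring step described above is straightforward to automate. The following is a minimal sketch in Python, not the author's actual procedure: it assumes responses are stored as one list of ratings per item, drops "don't know" (DK) responses from the means, and flags the bottom third of items by mean score as candidates for change. All item names and ratings are hypothetical.

from statistics import mean

# Hypothetical LAQ responses: one list of ratings per item; "DK"
# marks a "don't know" response and is excluded from the mean.
responses = {
    "Course textbook": [4, 5, 3, "DK", 4],
    "Landon (1995) reading": [3, 2, 4, 3, "DK"],
    "Final Exam Review": [5, 4, 5, 4, 4],
    "Field trip": [5, 5, 4, 5, 5],
    "Integrative Paper": [3, 3, 2, 4, 3],
    "Community Build exercise": [4, 4, 5, 4, 3],
}

# Mean rating per item, ignoring "don't know" responses.
item_means = {
    item: mean(r for r in ratings if r != "DK")
    for item, ratings in responses.items()
}

# Rank items from lowest to highest mean; the bottom third become
# the primary candidates for discussion at the professors' meeting.
ranked = sorted(item_means.items(), key=lambda kv: kv[1])
cutoff = max(1, len(ranked) // 3)
for rank, (item, m) in enumerate(ranked, start=1):
    flag = "  <- review for change" if rank <= cutoff else ""
    print(f"{rank:2d}. {item}: {m:.2f}{flag}")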

With regard to human subjects approval, this investigator contacted the university IRB chair to inquire about whether the summary scores could be reported in a journal article. The chair responded that the ratings would be considered archival data and could therefore be reported in a journal article without written consent. He also noted that student ratings were anonymous and responses in no way provided information about the identity of the respondent.

LAQ Summary Scores

The LAQ was employed by six professors teaching 215 students in 11 different sections of a generalist practice course over a 5-year period. The respondents were 1st-year, full-time master's students at a private southern university taking a required introductory generalist practice course. As a result of the relatively lengthy period and the number of students, sections, and instructors, the author and other professors were afforded a considerable amount of data as well as time to reflect on how the instrument could best be employed to improve teaching and enhance learning.

On a 5-point Likert scale, the grand mean for all items was relatively high at M=3.89 (SD=.45), with section means ranging from 3.69 to 4.23. The overall readings mean (i.e., the mean across all sections and all years) was also high at M=3.87 (SD=.48), with section means ranging from 3.70 to 4.17. The overall exam mean was M=3.79 (SD=.92), with section means ranging from 3.28 to 4.25. The overall exam review mean was M=4.08 (SD=.96), and the range was quite broad at 2.93 to 4.65. The overall exercise mean was M=3.97 (SD=.80), with section means ranging from 3.89 to 4.23. A listing of all the descriptive statistics across all sections and years goes beyond the needs of this article; however, readers can access the table containing this information as a PDF file on the Tulane University website (Ager, 2010).

Professor Discussions About Low-Rated Learning Activities

Procedures. As noted earlier, the bottom third of the item ratings received the most scrutiny. Over the years, a system for reviewing these items emerged and common criteria were developed. During the 1st year the LAQ was used, the majority of low-rated items were replaced. This was likely because the learning activities had not previously received careful scrutiny, which had resulted in an accumulation of several of questionable quality. In the author's subjective opinion, the learning activities yielded their sharpest improvement following that 1st year. Anecdotally, the improvement might be best reflected in a decision made about the course textbook, which constituted about 80% of the readings. During the 1st year the LAQ was used, the textbook was rated last. Because of this, we switched textbooks for the 2nd year, and the new textbook was rated high in the same professor's section. As most professors will acknowledge, changing a textbook involves considerable extra work, and the reason for such a change often needs to be compelling. The author speculates that the textbook would not have been changed had it not been for its low rating.

After the 1st year, items were more carefully scrutinized and less likely to be dropped. Other reasons for low scores were considered, which often led to retaining the learning activities but changing how they were implemented. In reviewing the numerous decisions about whether to retain specific items, common reasons why activities appeared to receive low scores emerged, as well as ways they might be improved. These problems and solutions fell into the following few categories.

Major insights and related recommendations gleaned from professor discussions. The following represents some of the major ideas generated during professor discussions concerning the LAQ scores. Most were derived from discussions about readings because they represented the vast majority of learning activities shared across sections. These insights and related recommendations are intended to assist professors who use the LAQ.

Based on discussions, we speculated that some readings were rated low because of an old publication date, which students may have taken as a sign of not keeping up with the current literature or new trends. However, a reading with an old publication date was sometimes retained because it was a seminal article. Sometimes the article was the best one written on the subject--it might have been relevant and extremely well written. When comparing ratings on such an article across different professors' sections, we found that students rated these readings higher, and probably appreciated them more, when the professor explained why they were included on the syllabus. For example, students presumably gained a deeper appreciation of a seminal article if they were helped to understand its importance in the literature or its historical context.

We surmised that some readings may have been rated low because they were challenging. Professors who received higher ratings on such readings tended to forewarn students that the reading was challenging. In other cases the professor provided questions or ideas to guide the reader. Higher scores on such items were also associated with professors who used lecture and discussion to carefully describe the essential information from these readings. On the other hand, some professors saw such handling of challenging material as coddling the students and believed that a certain amount of discomfort was good preparation for similar kinds of challenges students would face in the world.

Another insight from our talks was that readings sometimes received low scores because they needed to be better integrated within the class. For example, professors might discuss them in lecture but not test their content during exams or draw from them in assignments. When readings are not well-integrated within the classroom, students may consider them unimportant and rate them low. If the problem of integration is considerable, it is possible that students will not complete the readings.

Of course, sometimes a reading was rated low because the students just did not like it--they presumably found the material or its presentation uninteresting. We searched for better written material when this situation arose. Replacement of any low-rated reading depended on whether a superior chapter or article covering the same content could be located and, if not, whether we could deviate somewhat from the content covered so as to access a broader choice of replacements.

Reflections on Its Use

The LAQ provides a simple way to evaluate and alter specific learning activities within a course. Scores on the LAQ in this course were good but not excellent, leaving room for improvement. The grand mean, overall means, and separate mean scores were generally in the above-average range (ratings close to 4), with a few isolated class section means closer to a "medium" rating (i.e., a rating of 3).

Scores were also simple to tally. However, this investigator chose to use the item rankings rather than the means to identify learning activities with potential problems. As noted, section means ranged from 3.69 to 4.23, suggesting that a score of 4 on a given learning activity in one section was not necessarily comparable to the same score in another section. Ratings were presumably affected by variables other than the learning activity, such as instructor attributes or class composition. Consequently, rather than choosing items below an arbitrary mean (e.g., 3.5) as the cutoff for what activities should be targeted for change, we chose items with scores ranked in the bottom third. A hedged numeric illustration of this choice appears below.
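The following sketch, with invented section data, illustrates why a rank-based cutoff was preferred: a fixed mean cutoff (e.g., below 3.5) flags no items in a leniently rated section and many in a strictly rated one, whereas a bottom-third rule flags a comparable share of items in each. Both sections and all means are hypothetical.

# Invented per-item section means for two hypothetical sections whose
# overall rating levels differ, as they did across our 11 sections.
lenient_section = {"A": 4.4, "B": 4.1, "C": 3.9, "D": 3.8, "E": 3.7, "F": 3.6}
strict_section = {"A": 3.9, "B": 3.6, "C": 3.4, "D": 3.3, "E": 3.2, "F": 3.0}

def flag_fixed_cutoff(item_means, cutoff=3.5):
    # Flags every item whose mean falls below an absolute cutoff.
    return [item for item, m in item_means.items() if m < cutoff]

def flag_bottom_third(item_means):
    # Flags the lowest-ranked third of items, regardless of overall level.
    ranked = sorted(item_means, key=item_means.get)
    return ranked[: max(1, len(ranked) // 3)]

for name, section in [("lenient", lenient_section), ("strict", strict_section)]:
    print(name, "fixed cutoff:", flag_fixed_cutoff(section))  # 0 vs. 4 items
    print(name, "bottom third:", flag_bottom_third(section))  # 2 items each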

Benefits

Consistent with behavioral theory, learning activities were concrete and specific, thereby providing clear direction for change as a means to ultimately improve learning. LAQ ratings informed and facilitated professor discussion about how to improve teaching. In contrast to the SET, which addressed general teacher performance and interaction, the LAQ focused on concrete activities associated with the class, such as readings, exercises, and assignments, making it easier to identify specific variables to target for change.

The LAQ focused on characteristics of learning activities that were typically ignored in course evaluations and thus expanded targets for change. For example, the LAQ focused on learning that occurred outside the classroom, such as projects, written assignments, and readings. Notably, these activities typically represented the largest proportion of time students spent on a class. In the course evaluation literature, they represent an area typically overlooked for learning improvement.

Using student feedback as a primary method for gaining information represents a strength in course evaluation. First, students are considered valid sources of course evaluation (Falchikov, 1995; McKeachie & Kaplan, 1996). Furthermore, involving students in evaluating courses empowers them and gives them a sense of responsibility for the curriculum, which reinforces what they are taught. As professors, we want students to evaluate their practice (Steiner et al., 2006) and to empower clients. What better way to teach such concepts than to model them ourselves?

Student evaluation of teaching is related to another LAQ advantage--student involvement in their learning, or active learning. As a supplement to the SET, which focuses on instructor teaching behaviors, the LAQ emphasizes student behaviors that enhance learning, such as reading, completing assignments, and engaging in exercises. When learning activities are exciting and vital, students will be more inclined to engage in the learning process, which may facilitate their growth as developing social workers and lifelong learners. Furthermore, involvement in what one learns has been found to enhance student achievement, increase retention, and advance critical thinking skills (Steiner et al., 2006).

Changing readings or assignments involves additional work for the professor--new preparation, alterations in class discussions, and alterations in exams or assignments. Without adequate evaluation tools, professors presumably eliminate learning activities based on anecdotal experiences or the strong opinions of a student or two, which may not represent the views of the class. The LAQ provides a more systematic method of gathering information and making decisions about learning activities based on broader input from students.

Problems and Potential Solutions

The LAQ as an empirical tool. Although it was not the author's intent to present the LAQ in this article as a research instrument, its usefulness as an evaluative tool deserves consideration. For this to happen, several limitations need to be carefully addressed. For example, little if any evidence exists concerning whether the LAQ measures what it is intended to measure and the extent to which the instrument is consistent across time and situations. It would follow that the LAQ's reliability and validity need to be carefully evaluated. As suggested by the summary scores reported earlier, the LAQ has shown a broad range of mean scores across professors and across years. Presumably, ratings are affected and possibly confounded by variables other than the learning activity itself. This is not surprising given that other teaching evaluation instruments, such as the SET, are also affected by variables outside the teaching behaviors being measured, such as grade leniency (Blackhart et al., 2006; Marsh & Roche, 1997), instructor rank (Blackhart et al., 2006), class size, and student academic competency (d'Apollonia & Abrami, 1997). In support of the LAQ, its items have clear face validity and therefore likely measure the activity being listed. Furthermore, its demonstrated ability to assist professors in making informed decisions about the curriculum in 11 sections over a 5-year period provides some support for its usefulness. Nevertheless, such support remains anecdotal. Evaluating the psychometrics and effectiveness of the LAQ represents an important subsequent step in establishing its credibility and usefulness.

A further limitation relates to the LAQ's weighting of items. All LAQ items are weighted the same, even though some learning activities involve small investments of time and energy whereas others involve large investments. This is particularly evident with the readings, in which the textbook, which accounted for the vast majority of the reading load, was weighted the same as a single article. Consequently, an overall mean score for readings may not accurately represent the true contribution of the textbook. This is less of a problem when the LAQ is used solely to improve learning activities in a professor's course. However, for a research instrument aimed at carefully measuring change in the quality of learning activities, the lack of weighting may pose problems. One could partly rectify this problem by having students rate each chapter of the textbook separately, as illustrated below.
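The following sketch, using entirely hypothetical ratings and load shares, shows how much the equal-weighting assumption can matter: weighting the textbook by its approximate share of the reading load pulls the overall readings mean well below the unweighted figure.

# Hypothetical readings: (mean rating, approximate share of reading load).
# The textbook dominates the load but counts as one item in an
# unweighted mean, just like a single article.
readings = {
    "Course textbook": (3.2, 0.80),
    "Article A": (4.5, 0.10),
    "Article B": (4.3, 0.10),
}

unweighted_mean = sum(m for m, _ in readings.values()) / len(readings)
weighted_mean = sum(m * share for m, share in readings.values())

print(f"Unweighted readings mean: {unweighted_mean:.2f}")  # 4.00
print(f"Load-weighted mean:       {weighted_mean:.2f}")    # 3.44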

Another limitation is the skewing of scores toward the higher end of the rating scale, leaving little distinction between high and low item scores. Item variance is therefore limited, and it may be difficult to determine whether a notable change has taken place following an intervention, even if a statistically significant change is indicated. To address this limitation, one might change the benchmark for ratings. The current benchmark for rating learning activities is a 5-point Likert scale in which 1=poor and 5=excellent, which leads to scoring at the higher range of the scale. Instead, students might rate the learning activities by comparing them with similar activities in other classes that semester. Consequently, a learning activity such as a reading might more commonly be rated as similar in quality to other readings (a rating of 3), with some much worse (a rating of 1) and some much better (a rating of 5). This may provide greater variability in scores, thus making high versus low scores, as well as improvements, more discernible.

Problems in implementation. There are potential problems in implementing the LAQ in other schools of social work. One of the major procedures described for implementing the LAQ involved professors comparing scores and sharing ideas about how learning activities should be changed or altered. However, this opportunity depended on the professors sharing the same syllabus, which does not occur in all social work programs or with all courses, some of which have only one section. Without multiple sections and professors sharing the same syllabus, and therefore the same learning activities, one would lose the process of comparing scores and sharing ideas that led to identifying the source of problems and how they might be solved. Nevertheless, this author would argue that the LAQ can be employed by a professor independently. Such a professor, reviewing low-rated LAQ items, can still identify potential problems and develop solutions based on his or her own background in teaching.

The LAQ as a tool to improve learning and teaching. It is important to emphasize that instructor judgment needs to be employed when interpreting LAQ scores. For example, instructors need to be aware that ratings may be partly a function of the type of learning activity evaluated--some activities may receive higher or lower ratings based partly on whether they are pleasurable. One would expect that a field trip or class exercise would, on average, receive a higher rating than an exam. Consequently, a score of 4 on a 5-point scale for a field trip may be cause for concern if typical scores for field trips are closer to 5.

Another limitation relates to the "don't know" (DK) response option, which may mask relevant information. For example, it is likely that students frequently select the DK response when they have not read an article or chapter. Failing to read an article restricts student learning. If such a problem is considerable, which is arguably not uncommon in social work programs, it will cause a chain reaction with regard to learning in related activities such as class discussions, assignments based on the readings, and exams. Learning which readings are skipped can help the professor develop a better strategy for motivating students to read important material. It would follow that an additional response option for the readings might be "didn't read." It may be prudent to first conduct research to determine the reasons for the DK response and then alter the LAQ accordingly.

The Likert scale rating, particularly for some learning activities such as guided discussions, exercises, or assignments, is limited. An average rating of 3 may suggest that an exercise has problems, but it fails to identify what might be changed or retained to improve the activity. Rather, the professor is left to speculate about what might help the activity or whether it should be dropped altogether. To address the restrictions of the LAQ rating system, a professor might employ additional methods of gathering information, such as "minute feedback" (e.g., having students write a short note concerning what they liked and did not like about a given learning activity).

Of course, the LAQ and the SET together do not represent a comprehensive evaluation of teaching and learning, although the addition of the LAQ moves us in that direction. Several other areas could profit from evaluation. Expanding evaluation to include audiovisual tools such as PowerPoint presentations or online teaching techniques such as Blackboard may deepen our understanding of what is effective and what needs improvement. In-class evaluation or supervision by "master teachers," particularly early in one's teaching career, might enhance teaching competency. These are just a few methods that, along with the LAQ, can further enrich learning, teaching, and evaluation. As we expand what and how we evaluate teaching and learning, we expand opportunities to improve.

APPENDIX

LAQ Template With Examples

Please rate the assignments listed below using the following rating system:

5 = Excellent

4 = Above average

3 = Average

2 = Below average

1 = Poor

DK = Don't know

Readings

--1. Hepworth, D. H., Rooney, R. H., & Larsen, J. (1997). Direct social work practice: Theory and skills (5th ed.). Pacific Grove, CA: Brooks/Cole. (Course textbook)

--2. Landon, P. S. (1995). Generalist and advanced generalist practice. In R. L. Edwards & J. G. Hopps (Eds.), Encyclopedia of social work (19th ed., Vol. 2, pp. 1101-1108). Washington, DC: NASW Press.

Items 3-20 cover additional readings not listed here.

Other Assignments

--21. Final Presentation Assignment

--22. Final Exam Review

--23. Final Exam

--24. Integrative Paper

Class Exercises and Handouts

--25. Group/class discussion on Cardinal Values of Social Work, where vignettes were read and the class had to examine possible ways of addressing the problems (e.g., Mr. K., who said he accidentally broke his glasses, whereas you heard they were broken when he was drunk)

--26. Field Trip to Odyssey House

--27. "Community Build" Exercise

--28. Handouts

Please note any further comments about the learning activities listed above.

--

References

Ager, R. (2010). Means, standard deviations, ranks [Table]. Retrieved from http://tulane.edu/socialwork/upload/Table-of-Descriptives.pdf

Armstrong, J. S. (1998). Are student ratings of instruction useful? American Psychologist, 53(11), 1223-1224.

Beckman, L. J. (1994). Treatment needs of women with alcohol problems. Alcohol Health and Research World, 18(3), 206-211.

Blackhart, G., Peruche, B., DeWall, C., & Joiner, T. (2006). Factors influencing teaching evaluations in higher education. Teaching of Psychology, 33(1), 37-39.

Boyd, R. T. C. (1989). Improving teacher evaluations. Practical Assessment, Research & Evaluation, 1(7). Retrieved from http://PAREonline.net/getvn.asp?v=1&n=7

Cross, K. P. (1999). Assessment to improve college instruction. In S. J. Messick (Ed.), Assessment in higher education: Issues of access, quality, student development, and public policy (pp. 35-45). Mahwah, NJ: Lawrence Erlbaum.

d'Apollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, 52, 1198-1208.

Emerson, R., & Records, K. (2007, January). Design and testing of classroom and clinical teaching evaluation tools for nursing education. International Journal of Nursing Education Scholarship, 4(1). doi:10.2202/1548-923X.1375

Falchikov, N. (1995). Improving feedback to and from students. In P. Knight (Ed.), Assessment for learning in higher education. London, UK: Kogan Page Limited.

Gambrill, E. (1977). Behavior modification. San Francisco, CA: Jossey-Bass.

Keeley, J., Smith, D., & Buskist, W. (2006). The teacher behaviors checklist: Factor analysis of its utility for evaluating teaching. Teaching of Psychology, 33(2), 84-91.

Marsh, H. W., & Roche, L. A. (1997). Making students' evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52, 1187-1197.

McKeachie, W. J., & Kaplan, M. (1996). Persistent problems in evaluating college teaching. American Association of Higher Education Bulletin, 6, 5-8.

Palmer, P. J. (1998). Courage to teach. San Francisco, CA: Jossey-Bass.

Smith, B. (2007, December). Student ratings of teacher effectiveness: An analysis of end-of-course faculty evaluations. College Student Journal, 41, 788-800.

Steiner, S., Gerdes, K., Holley, L., & Campbell, H. (2006). Evaluating teaching: Listening to students while acknowledging bias. Journal of Social Work Education, 42, 355-376.

Stevens, J. J., & Aleamoni, L. M. (1985). The use of evaluative feedback for instructional improvement: A longitudinal perspective. Instructional Science, 13, 285-304.

Wilkerson, J. R., & Lang, W. S. (2004). A standards-driven, task-based assessment approach for teacher credentialing with potential for college accreditation. Practical Assessment, Research & Evaluation, 9(12). Retrieved from http://PAREonline.net/getvn.asp?v=9&n=12

Accepted: 02/11

Richard Ager

Tulane University

Richard Ager is associate professor at Tulane University.

Address correspondence to Richard Ager, School of Social Work, Tulane University, 6823 St. Charles Avenue, New Orleans, LA 70118; e-mail: ager@tulane.edu.

DOI: 10.5175/JSWE.2012.200900098