
What do MBA students think of teacher evaluations?

Student evaluations of teaching effectiveness (SETE) have been extensively studied in the literature, with the central debates revolving around the validity and reliability of the instruments used. Scant attention, however, has been paid in this literature to student perceptions of SETEs and their usefulness. This paper presents the preliminary results from a survey of MBA students at two Indian business schools where (anonymous) student evaluations of teachers are mandatory after course completion and are also used as one of the key determinants in faculty promotion and tenure decisions. Most students felt that, since they do not take these SETEs very seriously, their reliability is questionable.

Introduction

Student Evaluations of Teaching Effectiveness (SETEs) are one of the many tools employed by academic institutions the world over to evaluate the performance of faculty members. Typically, SETEs consist of standardized questionnaires administered at the end of each course. The ratings thus obtained are used at times to decide the performance pay of faculty members and, in the long run, to make tenure-related decisions. Apart from these ratings, academic institutions usually also consider other aspects, such as research output and/or administrative work, for pay and tenure-related decisions.

We now know that in the long run, output-based pay incentivizes 'good' workers to stay and 'bad' workers to leave a firm (Lazear, 1997). In other words, output-based pay enables the manager to strictly evaluate a worker's performance. However, output is not always easy to measure and often involves substantial monitoring costs. Input-based pay, which is typically based on the number of hours put in by the worker, is easier to implement but does not always help the manager evaluate the worker. In the case of faculty members, implementing SETEs is an attempt to evaluate the worker in an input-based pay regime. Much of the literature on SETEs has focused on their reliability and validity and on the opinions of faculty members and administrators. Students, one of the most important stakeholders in the process, have been relatively ignored. The objective of this paper is to document student perceptions of this process and to test for differences in perceptions among different sets of students.

Literature

SETEs have been extensively studied ever since they were introduced in the 1930s. Several studies have explored the validity and reliability of the survey instruments used, as well as the ability of students to evaluate faculty members objectively. An extensive review concluded that student ratings are largely valid and reliable and can provide valuable inputs to all the stakeholders involved (Marsh, 1987). While the central questions regarding the validity and reliability of SETEs were answered by the late 1980s, their acceptability was still in doubt, and reliability/validity studies on SETEs continue to be published to this day. Some have even argued that research on SETEs over the past fifty years has been driven largely by the urge to prove or disprove their validity and reliability, without emphasizing the need to avoid misusing or misinterpreting them (Theall & Franklin, 2001).

Valsan & Sproule (2008) argued that SETEs merely perform a legitimizing function for management, as they are ill-equipped to capture and evaluate any of the outcomes that an academic establishment aspires to achieve. They assert that when used for academic purposes, SETEs only lead to collusion between students and faculty, resulting in negative externalities such as grade inflation, dilution of academic rigor and, at times, damage to the careers of capable and qualified teachers.

A meta-validation model approach to understanding the reliability and validity of SETEs suggested that there is strong evidence in support of criterion validity but inadequate evidence in support of content- or construct-related validity of SETEs (Onwuegbuzie et al., 2007). Subsequent studies have used the meta-validation model to review the extant SETE literature. To elaborate, criterion validity measures whether a given set of variables captures a behavior that can also be corroborated by an existing or a future instrument. Content validity is the extent to which a measure or a set of variables explains a given construct (in our case, teaching ability). Construct validity, on the other hand, is the ability of the measure to demonstrate the relationships between variables that would be anticipated.

An extensive survey of research conducted in the field of SETE studies after the year 2000 by Spooren et al. (2013) uses the lens of validity studies to classify and evaluate the progress made in the field. The authors, after considering a very large number of papers published in the past decade, largely support the conclusions arrived at by Onwuegbuzie et al. (2007). There is still no consensus on the questions that should constitute SETEs, even though many standardized questionnaires are employed by universities the world over, primarily because there is no consensus on the construct of an 'ideal teacher'.

Several scholars have raised concerns, and provided evidence, about the lack of discriminant and divergent validity (i.e., whether variables that are supposed to be unrelated are indeed unrelated). Some of these concerns are about the gender, race and personality of the instructor unfairly influencing the ratings. The most common concern, however, is about expected grades: several empirical studies have shown that students give much higher ratings to faculty members who grade easily. A study based on SETE scores from 2600 students across three semesters at American University showed that ratings were heavily influenced by expected grades and by the gender and race of the instructor (Langbein, 1994). Another study by the same author, based on SETE scores across four years at the same university, found that faculty members and students are engaged in a socially destructive game of grade inflation (Langbein, 2008).

Studies have also shown that the bias in student ratings can vary with factors such as the level of the class being taught, interest in the subject matter before joining the class, the size of the class, the rigor of the course and the personality of the instructor (Al Issa & Sulieman, 2007). These factors also shape student attitudes towards the process of end of course evaluations.

Coming to student perceptions of SETEs, the pervasive feeling is that SETEs are not really taken into consideration in determining the performance pay, bonus or tenure of faculty members, and even if they were, this is not sufficient motivation for students to participate seriously in the process of teaching evaluation. Chen and Hoshower (2003) use the expectancy theory framework to understand the pre- and post-participation attitudes of students. Student participation in the process is said to be determined by the degree of their faith in the administration to actually employ the SETEs in deciding faculty pay or tenure.

Several studies have obtained student opinions of the process and compared them with faculty perceptions to understand the conflicts. For example, Sojka et al. (2002), comparing the responses of faculty and students at a midsized American university, found that students were much less likely than faculty to agree that faculty graded leniently for better ratings, or that faculty careers were tied to the ratings. Similar results were reported by Mukherji & Rustagi (2008) and Balam & Shannon (2010). In both studies, students were less inclined than faculty to believe that faculty may grade leniently for the sake of better ratings, and more inclined than faculty to believe that good teachers were indeed rewarded with better ratings.

Two studies that have focused exclusively on student perceptions are Marlin (1987) and Spencer & Schmelkin (2002). Both confirmed that students are interested in the process of SETEs but are not certain about how their administrations use them. With a database of over 12000 students across disciplines and stages, they concluded that one of the most important ways to make the process successful is to convince students that their opinions do matter. What dominated students' expectations of the process was the need to be heard by faculty members at various stages of learning during the course and for the feedback to be genuinely considered before the course is taught again. Marlin (1987) reached a similar conclusion, showing that if the administration intended to use the ratings as anything more than lip service, it needed to convince the students that it was indeed doing so.

Research Questions

The primary objective of this paper is to document student perceptions of the teaching feedback process. The central questions on which we seek students' views are therefore: (a) the credibility of the feedback process and (b) whether faculty members should be rewarded for better teaching, e.g., through performance pay. Further, we postulate the following hypotheses:

Hypothesis 1: Students who have been introduced to the academic debates regarding SETEs are likely to have different attitudes towards the use of SETEs in determining faculty performance pay and tenure-related decisions compared to students who are unaware of these debates.

Hypothesis 2: Those students who participate in the process and have the experience of filling out end of course evaluations are likely to differ in their attitudes towards the process from those who do not.

Design & Databases

This research was conducted at the Indian Institute of Management Calcutta (IIMC) and the Indian Institute of Management Ranchi (two of the premier business schools of India) during the 2013-14 academic year. The following is a description of the databases collected at these two institutions.

Database 1

The student population at IIMC consists primarily of engineers with some work experience. More than 90% of the students are between 21 and 27 years of age. It is mandatory for students to fill out feedback forms at the end of every course taught as part of the two-year post-graduate diploma program (PGP, equivalent to an MBA), and failure to fill out the form attracts a monetary fine. This move was instituted in concurrence with the student representatives of the time, and despite having the bargaining power to contest it, students have largely adhered to the process. As we have seen in the literature, one of the biggest concerns in administering SETEs is students' lack of willingness to participate in the process.

An elective titled 'The Economics of Human Resources' is offered to students in the second year of the PGP program. The elective covers the entire text of Lazear's (1997) 'Personnel Economics for Managers'. It also considers the academic labor market as an example and discusses the usage of SETEs and their consequences. At the end of the elective in the academic year 2013-14, the following question was asked as part of the end term examination, to be answered on three-fourths of a blank sheet of paper:

"At IIMC, students measure and rate professor performance through student evaluations after completion of the courses. Should IIMC offer bonuses or give raises to professors based on these performance measures? Why, or why not?"

Since the elective was taken by 74 students, 74 responses were collected at the end of the course. We made sense of this data by employing conventional content analysis, i.e., by breaking down each response into codes and documenting the total number of responses containing each particular code (Table 1). It must be noted that the responses obtained as part of the end term examination were not anonymous. As can be seen from Table 1, students strongly rejected the use of SETEs and admitted to randomness in the ratings they fill out. They also agreed that faculty members may grade liberally in return for good ratings. Two underlying themes emerged from these codes: students did not believe that the ratings were an accurate measure of faculty teaching effectiveness, and students were largely not keen on using SETEs to determine the performance pay or tenure of faculty members. An important concern specific to the IIMC context was that students attributed the randomness of the ratings to being forced to provide them under the institute administration's mandatory requirement.
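
To illustrate the tallying step behind Table 1, the following minimal sketch (in Python, with hypothetical codes and responses; the actual essays were coded manually) shows how counts and percentages of this kind can be computed:

    from collections import Counter

    # Each essay is reduced to the set of codes it contains (hypothetical examples,
    # loosely modeled on the codes reported in Table 1).
    coded_responses = [
        {"no bonus based on ratings", "randomness in student ratings"},
        {"faculty may grade leniently", "no bonus based on ratings"},
        {"SETEs should not be compulsory", "randomness in student ratings"},
    ]

    # Count how many responses mention each code and report the share of responses.
    counts = Counter(code for response in coded_responses for code in response)
    total = len(coded_responses)
    for code, count in counts.most_common():
        print(f"{code}: {count} ({100 * count / total:.0f}% of responses)")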

Database 2

Along with the PGP program, IIM Calcutta also offers a fellow program in management (FP, equivalent to a Ph.D). The students of this program are typically in their late twenties and come from diverse academic backgrounds. FP students are generally expected to take up academic jobs after obtaining their degrees. They attend exactly the same courses as the PGP students in their first year but are not required to fill out the end of course evaluations. In fact, the internet portal where the feedback forms are filled out has no provision for FP students to submit the form.

The data from Database 1 was used along with the survey instrument used by Marlin (1987) to develop a new survey questionnaire to measure student perceptions of the process.

The questionnaire used is reproduced in Table 2. All of the questions except question 4 were to be answered on a 5-point Likert scale. For question 4, the four responses were <15%, 15-30%, 30-50% and >50%; in the analysis, these responses have been weighted 4, 3, 2 and 1 respectively. As stated earlier, the questions largely measured two attitudes: the validity and reliability of SETEs in the student's opinion, and whether SETEs should be used in determining faculty performance pay and tenure. The questionnaire also gave space for comments on the subject outside of the questions asked.
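
To illustrate how the question 4 responses enter the analysis, the short sketch below maps the four categories to the weights just described and computes a mean weighted score; the responses shown are made up for illustration and are not the survey data.

    # Weighting scheme for question 4, as described above: smaller proposed shares
    # of pay tied to SETEs receive larger weights.
    Q4_WEIGHTS = {"<15%": 4, "15-30%": 3, "30-50%": 2, ">50%": 1}

    # Hypothetical responses, for illustration only.
    responses = ["<15%", "15-30%", "<15%", "30-50%", ">50%", "15-30%"]

    weights = [Q4_WEIGHTS[r] for r in responses]
    mean_weight = sum(weights) / len(weights)
    print(f"Mean weighted response for question 4: {mean_weight:.2f}")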

We e-mailed the questionnaire to all of the PGP second-year students (a batch of 460) and all FP students (85). The questionnaire was to be filled out anonymously. We received responses from 91 PGP2 students and 34 FP students. The section to which each student belonged was also noted as part of the questionnaire. The batch of PGP students is divided into six sections. Three of these sections were introduced to the academic debates surrounding SETEs as part of their compulsory 'Human Resource Management' course in the first year (January 2014); the other three sections were not taught the module on academic labour markets. The questionnaire was administered to the PGP students eight months after they had attended the 'Human Resource Management' course (September 2014).

Using this data, we intend to compare the responses of students who have participated in the process of SETEs (PGP) with those who have never filled out SETE forms (FP students). We also intend to compare the responses of those PGP students who were introduced to the academic debates surrounding SETEs with those who were not.

Database 3

The same survey was conducted before and after the elective 'The Economics of Human Resources', taught to a class of 50 students at IIM Ranchi. The elective was offered as part of the second-year coursework. IIM Ranchi too makes it mandatory for students to give ratings and uses these ratings as one of the factors in determining faculty tenure and promotion. The 50 students who took the elective largely comprised engineers between the ages of 21 and 26, much like at IIMC. The student populations differ, however, in that IIMC is ranked much higher than IIM Ranchi in various B-school surveys, and students at IIMC come from stronger academic backgrounds and with much higher scores in the entrance examination. While IIMC is a 50-year-old institution, IIM Ranchi is only 5 years old. These differences are likely to shape students' attitudes towards the administration and faculty.

Taught over a period of two months in the academic year 2013-14, the elective discussed important concepts in personnel economics, along with applications in various fields, including academic labor markets. The course contents were exactly the same as those taught to second-year students at IIMC and covered the full text of Lazear (1997). Students were made aware of the usage of SETEs in determining faculty tenure and promotion and of the various debates and conflicts surrounding their usage. Forty-two responses were obtained at the beginning of the course and 35 at the end, both sets belonging to the same batch of 50 students (Table 4). Both sets of responses were obtained anonymously.

A comparison of these 'before' and 'after' responses provides insight into whether better information about the process, coupled with sensitization to the academic debates surrounding these issues, makes a difference in student responses.

Findings

Database 1

The written essays were coded as shown in Table 1. The five codes that stood out in terms of frequency were: "Should not offer bonus based on ratings" (51%), "There is randomness in student ratings" (50%), "There is a chance that faculty are lenient in grading for ratings" (42%), "SETEs should not be compulsory" (38%) and "Other performance criteria should be considered along with ratings" (32%). As Chen and Hoshower (2003) have pointed out, students are particularly resistant to participating in the SETE process. Only 20% of the respondents felt that SETEs should be used for setting performance pay. A response that captures nearly all of these codes is presented below:

"The answer to a certain extent will depend on the genuineness of the feedback. However, basing the entire bonus on student evaluation alone provides an opportunity to the professor to lobby with the students and offer the course in a way which is more likeable to the students. So, say the students wish to free-ride in the course, it then creates a scenario where the professor may make the course very easy for students. By corollary, a hard working professor (who also makes the students work for the grades) may not get the best feedback vis-a-vis the professor offering easy grades. There is also no incentive for students to vote for the professor who teaches the best. Having said this, it also does not mean that the student evaluation will be completely lopsided. There will be biases (including deviations towards the mean etc.), but some students do reflect before ratings. Therefore, possibly, the evaluation could be made one of the components of performance appraisal but not the only measure. It is necessary to ensure that the feedback/evaluation shows a reflection of how well the professor actually taught and nothing else".

Database 2

An independent group t-test was conducted between the PGP and FP samples (Table 2). The p values reported assume an alternative hypothesis that the means are different from each other (H1: μ2 − μ1 ≠ 0). As can be noted, the two samples did not differ on the validity of SETEs, but differed somewhat significantly on whether SETEs should be used to determine performance pay and on the percentage of annual pay tied to them. Specifically, the PGP students, who regularly participate in the SETEs, reposed more faith in the process by seeking performance pay and tenure to be tied to ratings more than the FP students did (significant at 10%), but at the same time sought a smaller share of annual pay tied to ratings (significant at 10%). In addition, the PGP students were significantly more inclined than the FP students to feel that faculty change their behavior to improve their SETE scores. Finally, PGP students seriously consider seniors' perceptions of faculty before choosing their elective courses in the second year.
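
For illustration, an independent group t-test of this kind can be computed as in the following minimal sketch, assuming Python with scipy (the paper does not state the software actually used); the Likert scores shown are purely illustrative and are not the survey data.

    # Minimal sketch of an independent-samples t-test for one Likert item,
    # comparing two groups of respondents. All scores below are hypothetical.
    from scipy import stats

    fp_scores = [3, 4, 2, 3, 4, 3, 2, 3]    # hypothetical FP responses
    pgp_scores = [4, 3, 4, 4, 3, 5, 4, 3]   # hypothetical PGP responses

    # Two-sided alternative: H1 is that the group means differ (mu_2 - mu_1 != 0).
    t_stat, p_value = stats.ttest_ind(pgp_scores, fp_scores)
    print(f"t = {t_stat:.4f}, p = {p_value:.4f}")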

Another independent group t-test was conducted between the two groups of PGP students: those who were exposed to the academic labor market module in the HRM class ('with info' in the table) and those who were not ('without info'; Table 3). As in the previous test, the p values are reported for the alternative hypothesis H1: μ2 − μ1 ≠ 0. The results were markedly different from the previous test, as those 'with info' were less certain about using ratings for performance pay (significant at 5%) and sought a smaller share of annual pay to be tied to ratings (significant at 5%). Surprisingly, they ('with info') were less inclined to believe that faculty members may grade liberally to obtain better ratings (significant at 10%) and less insistent on having faculty members evaluated at the end of every course.

Database 3

The third test was performed between the responses obtained before and after the coursework at IIM Ranchi. The coursework in personnel economics was intended to sensitize the participants and help them make a more informed choice (Table 4; the p values reported are for the alternative hypothesis H1: μ2 − μ1 ≠ 0). The results, however, were rather counterintuitive. After the coursework, students reposed more faith in SETEs (question 1, significant at 1%), were more certain that students are accurate in assessing faculty members (also significant at 1%), but at the same time recognized that faculty members may indulge in easier grading (significant at 10%) and also that they may be unfairly rated (weakly significant). In other words, students at once admitted that faculty may be unfairly rated and claimed that students are accurate in evaluating faculty members.

An explanation for this can be found in the text boxes provided in each questionnaire for respondents to air any further views on the process. Several students complained that they were not sure whether the administration was really using the ratings, and they were not certain that the process could actually result in teachers either improving or being replaced. Students were not even sure whether they had complete freedom to rate a faculty member as they wished. A few selected responses from the students are tabulated in Table 5. During the course, students were informed that the administration does consider the SETEs in determining the tenure and promotion of faculty members and that student ratings are thus an important aspect of a teacher's career. We believe that the unexpected spike in the faith reposed in the process of student ratings is a result of the realization that students do have the power to affect the career of a professor. At the same time, students recognize that faculty may be rated unfairly, and also weakly agree that faculty may grade liberally to achieve better ratings.

Conclusion

We have observed that there are certainly some differences between the groups of students who participate in the SETE process and those who do not. Similarly, there are significant differences between those who have more information regarding the process and those who do not.

One limitation of this study is that it does not differentiate between differences in response due to having the additional 'information' that SETEs are used by the administration and differences in response due to being more 'aware' of the problems surrounding SETEs. In other words, do the changes in student attitudes reflect the realization of the control students have over teacher careers, or do they reflect the knowledge that student ratings can even break the career of a good teacher? The data obtained at Ranchi show that the change is due to the new information that students do have some control over faculty pay. The data obtained at IIMC show that those with information are more sober about using SETEs to determine faculty pay and tenure.

As the extant literature has suggested, it appears that communication about the usage of SETEs by the administration does matter. Students want to be heard and to be told that their opinion matters. As Spencer & Schmelkin (2002) suggest, it is important for the administration to communicate to students how the SETEs are being used, in order to minimize the random noise in the responses. Secondly, as students reported in Database 1, making the feedback mandatory only contributes further to the noise and crowds out the serious feedback provided by some students. As one of the students at Ranchi said: "It should be optional, in this way only those who are extremely satisfied or extremely dissatisfied will fill the evaluation form". From this statement it becomes clear that an optional/voluntary SETE process may capture responses only from the two tails of the distribution.

Debashish Bhattacherjee (debashish@iimcal.ac.in) & K.V. Ravishankar are from Indian Institute of Management Calcutta. Paper presented at the Second International Conference on Education, Social Sciences and Humanities, Istanbul. June 8-10 2015.

References

Al Issa, A. & Sulieman, H. (2007), "Student Evaluations of Teaching: Perceptions and Biasing Factors", Quality Assurance in Education, 15(3): 302-317. doi: 10.1108/09684880710773183

Balam, E. M. & Shannon, D. M. (2010), "Student Ratings of College Teaching: A Comparison of Faculty and Their Students", Assessment and Evaluation in Higher Education, 35(2): 209-221.

Chen, Y. & Hoshower, L. B. (2003), "Student Evaluation of Teaching Effectiveness: An Assessment of Student Perception and Motivation", Assessment & Evaluation in Higher Education, 28(1). doi: 10.1080/0260293032000033071

Marlin, J. W., Jr. (1987), "Student Perceptions of End-of-course Evaluations", The Journal of Higher Education, 58(6): 704-716. Retrieved from http://www.jstor.org/stable/1981105

Langbein, L. (2008), "Management by Results: Student Evaluation of Faculty Teaching and the Mis-measurement of Performance", Economics of Education Review, 27(4): 417-428. doi: 10.1016/j.econedurev.2006.12.003

Langbein, L. I. (1994), "The Validity of Student Evaluations of Teaching", Political Science and Politics, 27(3): 545-53.

Lazear, E. (1997), Personnel Economics for Managers, New York: Wiley.

Marsh, H. W. (1987), "Students' Evaluations of University Teaching: Research Findings, Methodological Issues, and Directions for Future Research", International Journal of Educational Research, 11: 253-87.

Mukherji, S. & Rustagi, N. (2008), "Teaching Evaluations", Journal of College Teaching & Learning, 5(9): 45-54.

Onwuegbuzie, A. J., Daniel, L. G. & Collins, K. M. T. (2007), "A Meta-validation Model for Assessing the Score-validity of Student Teaching Evaluations", Quality & Quantity, 43(2): 197-209. doi: 10.1007/s11135-007-9112-4

Sojka, J., Gupta, A. K. & Deeter-Schmelz, D. R. (2002), "Student and Faculty Perceptions of Student Evaluations of Teaching: A Study of Similarities and Differences", College Teaching, 50(2): 44-49.

Spencer, K. J. & Schmelkin, L. P. (2002), "Student Perspectives on Teaching and its Evaluation", Assessment & Evaluation in Higher Education, 27(5).

Spooren, P., Brockx, B. & Mortelmans, D. (2013), "On the Validity of Student Evaluation of Teaching: The State of the Art", Review of Educational Research, 83(4): 598-642. doi: 10.3102/0034654313496870

Theall, M. & Franklin, J. (2001), "Looking for Bias in All the Wrong Places: A Search for Truth or a Witch Hunt in Student Ratings of Instruction?", New Directions for Institutional Research, 2001(109): 45-56. doi: 10.1002/ir.3

Valsan, C. & Sproule, R. (2008), "The Invisible Hands behind the Student Evaluation of Teaching: The Rise of the New Managerial Elite in the Governance of Higher Education", Journal of Economic Issues, XLII(4).
Table 1 Content Analysis of Responses Given as a Part of the End Term Examination
(each code is followed by its count and its percentage of the 74 responses)

Should not offer bonus based on ratings: 38 (51%)
There is randomness in student ratings: 37 (50%)
We should use the performance measures to give raises: 15 (20%)
There is a chance that faculty may grade leniently for better ratings: 31 (42%)
A low bonus based on ratings may be considered: 11 (15%)
Other performance criteria should be considered along with this: 24 (32%)
SETEs should not be compulsory: 28 (38%)
Student communicated about how ratings are used: 2 (3%)
There may be skewness if class strength is low: 4 (5%)
Student ratings will keep the profs in check and continuously perform: 1 (1%)
Students do not have any incentive to correctly rate faculty: 3 (4%)
Each prof is not rated by the same set of students; this may be unfair to profs: 7 (9%)
Teaching ability is not a relevant criteria: 2 (3%)
Sometimes the content may be too hard for the students to understand and lead to them rating the prof low: 4 (5%)
Students cannot possibly judge faculty members: 2 (3%)
Students may reward a prof who grades leniently: 1 (1%)
There is no scope for profs to learn on the job: 1 (1%)
Students may display bias such as gender/region/caste etc.: 2 (3%)
Faculty members may not like their bonuses/raises controlled by students: 1 (1%)

Table 2 Differences between Responses of PGP & FP Students
(FP: N=34; PGP: N=91; independent t-test; * significant at 10%, ** significant at 1%)

1. It is possible to measure teaching effectiveness using end of course evaluations
   FP: mean 3.323529, SD 0.9118941; PGP: mean 3.3186, SD 1.031494; t = -0.0241, p = 0.9808

2. Students are fair and accurate and give enough thought in rating faculty members in end of course evaluations (SETEs)
   FP: mean 2.882353, SD 1.007989; PGP: mean 2.604396, SD 0.9761503; t = -1.4042, p = 0.1628

3. Faculty members should be paid variable/performance pay based on SETEs
   FP: mean 3.058824, SD 1.071424; PGP: mean 3.417582, SD 1.0442; t = 1.6973, p = 0.0922 *

4. What proportion of variable or performance pay should be based on SETEs?
   FP: mean 3.588235, SD 0.608906; PGP: mean 3.351648, SD 0.7359593; t = -1.6717, p = 0.0971 *

5. Faculty members can change their teaching behavior or improve based on SETEs (ratings)
   FP: mean 3.705882, SD 0.8714117; PGP: mean 4, SD 0.7302967; t = 1.8986, p = 0.06 *

6. Parameters other than teaching performance such as research output should be given more weightage in assessing performance of faculty members
   FP: mean 3.823529, SD 0.8693637; PGP: mean 3.582418, SD 0.803685; t = -1.4596, p = 0.1469

7. Faculty members' tenure/promotion should be decided based on SETEs
   FP: mean 2.647059, SD 0.9497162; PGP: mean 3.417582, SD 1.033505; t = 3.7891, p = 0.0002 **

8. You consider student perceptions (e.g. from senior batches) of a faculty member's teaching ability before choosing to do their course
   FP: mean 3.823526, SD 0.9035482; PGP: mean 4.362637, SD 0.7960525; t = 3.2461, p = 0.0015 **

9. There is a chance that faculty members make the course too easy or grade liberally for better ratings
   FP: mean 3.617647, SD 0.9851844; PGP: mean 3.582418, SD 1.054787; t = -0.1691, p = 0.866

10. There is a chance that faculty members are unfairly rated
    FP: mean 3.636364, SD 0.7833495; PGP: mean 3.586207, SD 0.8429147; t = -0.2966, p = 0.7673

11. Faculty members' teaching performance must be measured every time they teach a course
    FP: mean 3.735294, SD 1.081772; PGP: mean 4, SD 1.044466; t = 1.2448, p = 0.2156

Table 3 Difference between Responses of Students with and without Exposure to Debates on SETEs
(With info: N=43; Without info: N=48; independent t-test; * significant at 10%, ** significant at 5%)

1. It is possible to measure teaching effectiveness using end of course evaluations
   With info: mean 3.209302, SD 1.145553; Without info: mean 3.416667, SD 0.918679; t = -0.957, p = 0.3412

2. Students are fair and accurate and give enough thought in rating faculty members in end of course evaluations (SETEs)
   With info: mean 2.534884, SD 0.797282; Without info: mean 2.66667, SD 1.117241; t = -0.6408, p = 0.5233

3. Faculty members should be paid variable/performance pay based on SETEs
   With info: mean 3.186047, SD 1.006072; Without info: mean 3.625, SD 1.044234; t = -2.0367, p = 0.0446 **

4. What proportion of variable or performance pay should be based on SETEs?
   With info: mean 3.511628, SD 0.66805; Without info: mean 3.208333, SD 0.770696; t = 1.9949, p = 0.0491 **

5. Faculty members can change their teaching behavior or improve based on SETEs (ratings)
   With info: mean 3.906977, SD 0.750046; Without info: mean 4.083333, SD 0.70961; t = -1.1522, p = 0.2523

6. Parameters other than teaching performance such as research output should be given more weightage in assessing performance of faculty members
   With info: mean 3.72033, SD 0.766152; Without info: mean 3.458333, SD 0.824062; t = 1.5687, p = 0.1203

7. Faculty members' tenure/promotion should be decided based on SETEs
   With info: mean 3.27907, SD 0.959305; Without info: mean 3.541667, SD 1.090741; t = -1.2132, p = 0.2282

8. You consider student perceptions (e.g. from senior batches) of a faculty member's teaching ability before choosing to do their course
   With info: mean 4.209302, SD 0.803508; Without info: mean 4.5, SD 0.771845; t = -1.7593, p = 0.0820 *

9. There is a chance that faculty members make the course too easy or grade liberally for better ratings
   With info: mean 3.348837, SD 1.110206; Without info: mean 3.791667, SD 0.966422; t = -2.034, p = 0.0449 **

10. There is a chance that faculty members are unfairly rated
    With info: mean 3.609756, SD 1.069534; Without info: mean 3.565217, SD 0.583178; t = 0.2447, p = 0.8073

11. Faculty members' teaching performance must be measured every time they teach a course
    With info: mean 3.72093, SD 1.119642; Without info: mean 4.26087, SD 0.905165; t = -2.5093, p = 0.0139 **

Table 4 Differences between Responses of Students before and after a Course in Personnel Economics with Discussion on SETEs and Academic Labor Market
(Before: N=42; After: N=35; independent t-test; * significant at 10%, ** significant at 5%)

1. It is possible to measure teaching effectiveness using end of course evaluations
   Before: mean 3.595238, SD 0.857095; After: mean 4.057143, SD 0.591253; t = 2.6968, p = 0.0086 **

2. Students are fair and accurate and give enough thought in rating faculty members in end of course evaluations (SETEs)
   Before: mean 2.880952, SD 1.108776; After: mean 3.342857, SD 1.083102; t = 1.8394, p = 0.0698 *

3. Faculty members should be paid variable/performance pay based on SETEs
   Before: mean 3.97619, SD 0.896826; After: mean 3.685714, SD 0.832128; t = -1.462, p = 0.1479

4. What proportion of variable or performance pay should be based on SETEs?
   Before: mean 3.238095, SD 0.617213; After: mean 3.085714, SD 0.701739; t = -1.0136, p = 0.3140

5. Faculty members can change their teaching behavior or improve based on SETEs (ratings)
   Before: mean 4.333333, SD 0.570266; After: mean 4.171429, SD 0.513678; t = -1.2972, p = 0.1985

6. Parameters other than teaching performance such as research output should be given more weightage in assessing performance of faculty members
   Before: mean 3.5, SD 0.943527; After: mean 3.228571, SD 1.05957; t = -1.1886, p = 0.2384

7. Faculty members' tenure/promotion should be decided based on SETEs
   Before: mean 3.690476, SD 0.869205; After: mean 3.657143, SD 0.968409; t = -0.1591, p = 0.8740

8. You consider student perceptions (e.g. from senior batches) of a faculty member's teaching ability before choosing to do their course
   Before: mean 3.571429, SD 0.8306; After: mean 3.514286, SD 0.886879; t = -0.2915, p = 0.7715

9. There is a chance that faculty members make the course too easy or grade liberally for better ratings
   Before: mean 3.285714, SD 1.088425; After: mean 3.657143, SD 0.968409; t = 1.5669, p = 0.1214

10. There is a chance that faculty members are unfairly rated
    Before: mean 3.571429, SD 0.940754; After: mean 3.914286, SD 0.562109; t = 1.8918, p = 0.0624 *

11. Faculty members' teaching performance must be measured every time they teach a course
    Before: mean 4.309524, SD 0.680319; After: mean 4.4, SD 0.49705; t = 0.6543, p = 0.5149

Table 5 Some Responses Given by the Students at IIM Ranchi
as Views Aside from the Questions Asked

S. No    Response

1        I have observed at most times students do not give enough
         thought while rating a faculty. They perform the feedback
         process for the sake of it. The management should take
         steps to create awareness of its importance among students.

2        It is often observed that inspite of the poor ratings that
         an individual faculty member receives, no actionable steps
         are taken by the institute or the faculty members themselves
         to set things right. A practice to share the feedback with
         the faculty should be instituted and the faculty should be
         marked on the improvements that he/she has shown on an
         incremental year on year basis

3        The students with least interest in the subject tend to be
         the most negative about the profs as they never get to
         understand how much the prof has actually put in and thus
         give a bad feedback to the faculty. A measure which links
         the students' marks and the feedback given should be made
         which will negate such incidents. A student getting a D/C
         will most likely give a bad feedback which may not be right

4        Students should be clearly communicated to about
         the anonymity of the process

5        Changes should be necessarily implemented if
         suggested by majority