However, Mike Birnbaum's survey addresses a different and very important issue: namely, how do professors respond to student evaluations? By and large, college professors are not total dummies. They know that student evaluations can affect their chances for promotion and tenure. This is particularly true at institutions like the one the Irascible Professor teaches at (Krispy Kreme U), where we boast that "learning is preeminent".
The results from Mike's survey provide us with some understanding of why grade inflation permeates the academy. For example, 65.4% of the faculty respondents to the survey felt that raising grading standards in a course would result in lower student evaluations, while only 3.4% felt that the opposite would be true. At the same time, 57.2% felt that raising grading standards would increase student learning, while only 7.7% felt that less learning would take place.
More importantly, 72.1% of the faculty respondents felt that the use of student evaluations encourages faculty to "water down" the content in their courses, while only 26.9% felt that this was not true. 48.6% of the respondents report that they now present less material in their courses than in the past, compared to only 14.9% who report that they present more material now. 32.2% of the respondents admit that they now use lower standards for a passing grade than in the past, compared to 7.2% who say that they use higher standards.
Given this trend, it was not too surprising to find that the faculty respondents feel that only about 60% of the students graduating from their departments "possess the general education, specific skills, and knowledge base that should be required of a graduate". In other words, folks, two of every five of our graduates should not have received that BS or BA degree.
Mike also surveyed a smaller sample (142) of lower division (freshman and sophomore) students to see how they would rate 89 hypothetical classes based on three variables: the course content, the grading standards, and individual instructor characteristics. The results were remarkably consistent. 94.4% of the students gave higher evaluations to "an attractive, well dressed, 36 year old female with a nice personality" than to a "62 year old male with a slight tremor... who doesn't smile in class". 92.3% gave higher ratings to a class with "light" content than to a class with "heavy" content; in this case, "light" and "heavy" referred to the amount of reading required and the amount of out-of-class work required. 97.9% of the students gave higher ratings to a course with "very easy" grading standards than to a course with "very hard" grading standards. "Very easy" was defined to mean that the instructor gave mostly A and B grades, and seldom gave a C grade. The "very hard" grading standard was 7% A, 13% B, 40% C, 25% D, and 15% F. To be sure, this is probably a more stringent grading standard than most instructors would use; but only 9.8% of students gave their highest rating to courses with "medium-easy" or "medium-hard" grading standards. "Medium-hard" meant 30% A and B grades, 50% C, and 20% D and F grades.
What does all this mean? Well, in the Irascible Professor's opinion, the pervasive use of student evaluations of teaching in retention, tenure, and promotion decisions has had a negative effect on standards. This does not mean, however, that student evaluations of teaching are without value. The IP has been a department chair (some would say that this is a genuinely wooden post) for 10 of his 30 years at Krispy Kreme U. Part of his responsibilities has been to review the student evaluations for all the instructors in the department. He has found that if an instructor is truly inept, that shows up in evaluations that are significantly below departmental averages semester after semester. Likewise, if an instructor has an "attitude" problem, that shows up as well. However, once you get beyond that small group with evaluations significantly below the mean, it becomes hard to differentiate between instructors. Often the most "entertaining" instructors receive very high ratings, but there is no evidence that their students learn more. Occasionally, one does notice a trend of declining student evaluation scores for a faculty member who at one time received decent evaluations. This can be a signal that an instructor is losing interest in his or her teaching.
On the whole, in the IP's opinion, student evaluations of teaching do not measure learning in any significant way. Instead, they measure student "satisfaction" with the instructor and with other aspects of the course. That is important information, but it is at best very incomplete information about how good a job a particular instructor is doing. For that reason, other measures of instructor performance are sorely needed.
The Irascible Professor invites your comments.
©1999 Dr. Mark H. Shapiro - All rights reserved.