Carnegie Mellon University School of Computer Science

Guidelines for the Evaluation of Teaching Faculty

I. PURPOSE

The “Policy on Teaching Track Appointments” describes the School of Computer Science’s implementation of the University Policy on Teaching Track Appointments. This document is meant to accompany that one by providing background and elaboration regarding the evaluation of teaching quality, a critical component of the portfolio used to determine reappointment and promotion (R&P) decisions. We also comment on the issue of “pseudo-tenure” as it relates to Teaching Track R&P.

Context: Over the past decade, a wide variety of teaching engagements in both undergraduate and graduate curricula have become central to the core educational mission of CMU. These include traditional lecture-based courses, project-oriented mentoring, distance delivery of courses, and lab-oriented courses. While in the past CMU faculty carried out their teaching in a classroom setting using traditional lectures, homework, exams, and so on, today instructors carry out their educational activities in many new ways. These changes respond to a variety of factors, such as new models of education in which mentored, project-based work plays a primary role; new technologies that support distance and remote education; and the internationalization of educational programs, among others.

Philosophy: Different forms of teaching require different instruments to evaluate their effectiveness. While traditional evaluation of teaching quality has relied on a single university-wide faculty course evaluation (FCE) instrument, such an approach is now inadequate to account for the different teaching models in use. Carnegie Mellon’s struggle over the last few years to find a satisfactory evaluation instrument for teaching is a strong indicator of this problem. For example, the specific questions that one might ask students regarding the quality of instruction in a course will likely differ depending on whether the course is taught on campus in a traditional lecture format, through taped videos in a distance-education setting, or through project mentorship in a project-oriented class.

II. TEACHING MODELS

To determine the most appropriate instrument(s) for teaching evaluation one must first determine what kind of teaching model is being evaluated. Today we can identify at least the following forms, many of which are used in combination:

  • Lecture-based:  In these courses the primary body of material is communicated through lectures given by the instructor, supplemented by homework and examinations.
  • Project-based:  In these courses one or more student projects form the core of the educational experience of students. Instructors serve as mentors, guides, and critical appraisers of student project work.
  • Laboratory-based: In these courses students work in a laboratory setting, with oversight provided by the instructor. The role of the instructor is similar to that of a project-based course, although the actual work of students is typically carried out in a more controlled setting, and the duration of a project is much shorter.
  • Discussion-based: These courses are centered on seminars and recitations, in which the primary role of the instructor is to facilitate discussion.
  • Distance education: There are numerous models for distance education. One prominent form uses recorded lectures of faculty (e.g., distributed via DVD or streaming video) combined with electronic interaction with a distance education instructor – who may not be the person giving the recorded lectures.

III. INSTRUMENTS FOR EVALUATION

It is beyond the scope of this document to detail specific instruments for teaching evaluation and their correspondence to teaching models such as those listed above. However, we can identify at least the following forms of evaluation that are currently being used at CMU.

  • Student Questionnaires: These allow students to evaluate a faculty member’s teaching effectiveness through a set of questions designed to gauge both the quality of the course and the quality of the teaching. Traditional university course evaluations (FCEs) fit into this category. When choosing an appropriate questionnaire, care should be taken to balance two competing needs: (a) the need to have a standardized instrument that will allow comparison across different courses, instructors, and semesters; and (b) the need to tailor the instrument to the model of course delivery and instruction. While it is clear that a single university-wide FCE is likely to be inadequate, so also is the opposite extreme of having distinct questionnaires for every course.
  • Direct Observation: Such instruments are based on direct evaluation of an instructor by peers or other members of the institution. This may come about through observation of classroom activity (in the case of a lecture-based model), or other engagements (e.g., observing chat rooms in the case of distance education).
  • Outcomes Evaluation: Such instruments attempt to evaluate teaching effectiveness in terms of what students have learned. These may focus on the competency of students as evidenced in final project presentations or exams. Or they may attempt to gauge the difference in what students know between the time they entered the class and when they finished it.

IV. PROCESS

The need to tailor the evaluation instrument to the model of instruction raises the issue of how an SCS Unit should carry out the evaluation of courses taught by its faculty. We recommend the following guidelines:

  • Before a course is taught, the Unit offering it should determine what the appropriate evaluation method(s) will be. The decision might be based on Unit policies that match evaluation instruments with course models, or it might involve the instructor, who could help select the most appropriate instrument.
  • To support cross-course comparisons, there should be a small number of candidate evaluation mechanisms to choose from. When University- or School-level instruments are available and appropriate, they should be used.
  • If a new kind of instrument is created, it should be circulated for comment and standardization, at least within the Unit and possibly within the School.
  • When preparing the portfolio for a faculty member under review, the Unit should include a rationale explaining why a given instrument was used for a given course whenever that instrument is not widely used across the School. Additionally, the Unit should create a guide for faculty to consult during R&P proceedings that explains how to interpret the evaluation data. These review guides should be kept at the SCS level for use in other cases that rely on similar instruments.

V. PSEUDO-TENURE

The committee considered whether the second appointment of a faculty member within any teaching track rank should carry more weight and/or follow a different process from other reappointments within that rank. This is the case, for example, in the Research Track, where one of the reappointments is designated as a “pseudo-tenure” decision. Based on University Policy, history within SCS, and general considerations of the philosophy behind reappointment and promotion in the teaching track, the committee wanted to be clear that no such distinction should be made between appointments within a rank. In particular, the University and SCS have worked hard to ensure that the teaching track is not governed by a “promotion clock,” as it is important to allow faculty to remain indefinitely at the same level of appointment, if that is their career choice. Additionally, University policy is clear that teaching track appointments carry no guarantee of continued reappointment, and that full evaluations are required at each appointment, promotion, and reappointment decision.