Evaluating Department Achievements: Consequences for the Work of Faculty
By Jon F. Wergin

From the December 1999 AAHE Bulletin

Some colleges have learned how to change the program assessment process from a dreaded chore to a learning experience. Here are their strategies.


See if this scenario describes what happens in your own department: Every five to seven years your unit comes up for a formal program review. It’s an event that does not inspire much enthusiasm among the faculty. There’s plenty of busywork as the self-study is prepared, and there’s a lot of grumbling about wasted time and bureaucratic intrusion. Everyone’s goal is to get through the process with a minimum of aggravation. When it’s over, the final report disappears into the bowels of the administration and nothing much changes; but at least the faculty won’t have to worry about it again for a while. This whole pageant is repeated, with minor variations, for regional and specialized accreditation, student outcomes assessment, and various ad hoc strategic planning initiatives.

If all this seems distressingly familiar, you’re not alone. Last year I was asked by the Pew Charitable Trusts, in conjunction with AAHE’s New Pathways II project, to undertake a survey of quality assurance practices in academic departments, and I’ll give away one of the key findings right now: There is plenty of assessment going on. Academic departments are evaluated in as many as five different ways — formal program review, student outcomes assessment, regional accreditation, specialized/professional accreditation, and performance-based budgeting. Some lucky departments are faced with all five, sometimes in the same year. But what is the cumulative impact of all this evaluation on the department and its faculty? How are all these data used internally? Are faculty work lives changing, in useful and constructive ways, or is evaluation instead making faculty work more onerous and bureaucratic? What kinds of evaluation policies and practices encourage constructive change in departments and a stronger culture of collective responsibility there? Some of the answers to these questions might surprise you.

First, however, a little background. My associate Judi Swingen and I began this project in August of 1998. We reviewed the literature, called upon personal networks of informants, and sent a mass mailing to all campus provosts. We eventually studied about 130 institutions across the Carnegie categories. From some we merely collected information on paper; for others we conducted telephone interviews with key campus participants; and for eight institutions we wrote extensive case studies based on two-day visits to the campuses.

We found widespread discontent with how institutions evaluate their academic departments. Among several roots of campus unhappiness, one was most striking: Most departments and most faculty failed to see the relevance of program evaluation and assessment to the work they did. The dominant mood on the campuses we studied was that program review and other forms of departmental assessment are largely ritualistic and time-consuming affairs, mandated from above, with few real consequences for the lives of the faculty.

A Note From Gene Rice, Director, AAHE Forum on Faculty Roles & Rewards

Jon F. Wergin’s work on the evaluation of the department has important implications for the evaluation of the work of individual faculty and the future of the academic career. Contributions to the mission of the department will play an increasingly central role in the evaluation and rewarding of faculty. One way or another, faculty must learn to share what is an ever-increasing load. Will we move toward "unbundling the faculty role," as is being urged, or toward "differentiated staffing"? What will happen to the vision of "the complete scholar" called for in Scholarship Reconsidered?

This article is a condensed version of a longer report, Evaluating Academic Departments: Best Practices, Institutional Implications, written principally for academic administrators and available in early 2000 from AAHE as part of the New Pathways Working Paper Series of the Forum on Faculty Roles & Rewards.


Components of Effective Evaluation at the Departmental Level

Despite this rather bleak portrayal, we found some notable exceptions, places where unit evaluation informed judgments of worth and improved departmental functioning. These institutions do three things well: The organizational and cultural setting promotes an atmosphere conducive to evaluation; evaluation policies and practices are viewed as credible and fair; and evaluation criteria and standards are scrutinized carefully. I’ll describe each of these characteristics in turn.

Organizational and cultural setting. Sometimes formal campus policies for assessment or program review seemed to make little difference; what did matter was effective leadership. Campuses with successful practices concerned themselves first with building an institutional climate supportive of quality improvement. For example, when the provost at a private research university with a history of successful program review was asked how he would go about initiating evaluation in another institution, he said this: "First I’d take a measure of the institution and its vision for the future. Is there ambition for change? I would try to find ways of articulating a higher degree of aspiration; if there wasn’t a strong appetite for this, then program review would be doomed to failure."

The elements of a "quality" institutional climate, as suggested by the institutions we reviewed, were these:

  • A leadership of engagement: leaders who are able to frame issues clearly, put clear choices before the faculty, and be open to negotiation about what will inform these decisions. Of all the elements of organizational climate, this one was the most important.
  • Engaged departments: departments that ask very basic questions about themselves — "What are we trying to do? Why are we trying to do it? Why are we doing it that way? How do we know it works?" Evaluation helps to define the academic work of the department.
  • A culture of evidence: a spirit of reflection and continuous improvement based on data, an almost matter-of-fact acceptance of the need for evidence as a tool for decision making.
  • A culture of peer collaboration and peer review: negotiation of common criteria and standards for evaluation based on a shared understanding by departmental faculty of one another’s work.
  • A respect for difference: a differentiation of faculty roles, leading to a shift in focus of the evaluation from work that is judged by standards external to the unit ("merit") to the contribution of the faculty member to the mission of the unit ("worth").
  • Evaluation with consequence: a tangible, visible impact on resource allocation decisions, without being so consequential that the process turns into a high-stakes political exercise.

Evaluation policies and practices. We found that, overall, the most effective policies and practices were both flexible and decentralized. Units were invited to define for themselves the critical evaluation questions, the key stakeholders and sources of evidence, and the most appropriate analysis and interpretation procedures. This suggests that institutions should focus less on accountability for achieving certain predetermined results and more on how well units conduct evaluations for themselves and use the data these evaluations generate. Rewards then accrue to units that can show how they have used the assessment to solve problems and resolve issues. This notion is similar to "academic audit" procedures currently in widespread use in Western Europe and Hong Kong: Rather than attempting to evaluate quality itself, the focus instead is on processes believed to produce quality.

Evaluation criteria and standards. Criteria are the kinds of evidence collected as markers of quality; standards are the benchmarks against which the evidence is compared. We found many problems with the use of evidence. It wasn’t that institutions lacked information that might lead to judgments about departmental quality: our database contained examples of more than 100 quality indicators, and these were distributed fairly evenly across "input" (faculty qualifications, FTEs), "process" (curriculum quality, demands on students), and "output" (faculty publications, student learning) criteria. The problem rather was that in almost no case did institutional procedures call for examining the quality of the evidence itself, and this can lead to some bizarre consequences. Here is a particularly egregious example: At one flagship university the assessment of "quality of instruction" includes these criteria: number of full-time faculty (undergraduate and graduate), total students per full-time faculty (undergraduate, master’s, and doctoral), and degrees awarded per faculty (baccalaureate, master’s, and doctorate). How, one might ask, do these indices qualify as markers of instructional quality? What do they have to do with the quality of student learning?

Further, we found a widespread lack of clarity and agreement about what the standards should be. For example, what is the most appropriate standard for departmental research productivity: Departmental goals negotiated earlier with the dean? Last year’s performance? The extent to which the scholarship fits within school priorities or the university’s strategic plan? Or how well the department stacks up against its "peer" departments in other institutions? Standards considered important or credible by one stakeholder group may not be considered important at all by another; thus, departmental quality will always be in the eye of the beholder.

Recommendations for Good Practice

What does all this suggest about what institutions could do to promote more effective unit evaluation? Here are four possibilities.

1. Be proactive in discussions of "quality."
Too many conversations about assessment proceed from the assumption of shared definitions of quality. At most institutions the assumption seems to be that determining quality is mostly a problem of data collection — that all we need to do is to find the right instrument or set of indicators. In the search for tools, the standards by which judgments of worth are made are largely ignored. The message is plain. Any campus wishing to develop sounder and more useful evaluation of academic programs must address the multidimensional meanings of quality, as slippery and elusive as the concept is. Departmental faculty will be more likely to take seriously an evaluation that uses criteria chosen for their credibility, not just for the convenience or accessibility of the evidence.

2. Decentralize evaluation to the maximum possible extent.
The "maximum possible extent" is the point at which the evaluation strikes a meaningful balance between the objectives of the department and the appropriate needs of the institution. A good way to encourage discussions of quality is to begin them at the level closest to the work of the faculty, namely in the academic programs themselves. One way for departments to reaffirm their commitment to academic quality is for them to embrace not just a set of mission and goal statements but a well-articulated set of principles that reflect quality attributes to which the department aspires, and for which its faculty are willing to be held mutually responsible.

3. Recognize that evaluation is not for amateurs: Address the developmental needs of deans, chairs, and faculty.
I don’t mean to be pejorative about this, nor to imply that program assessment should be turned over to a cadre of specialists. I only want to suggest that using information well is a learned skill. Faculty members trained as chemists, historians, or physical therapists usually receive little if any training in topics such as evaluating data quality and cross-examining evaluative evidence, even though these are crucial skills to bring to a review team.

4. Focus not just on enhancing collaboration and teamwork but on "organizational motivation."
The problem that dogs many administrators — and one of the most common themes emerging from our site visits — is how to foster a department’s internal commitment to quality and change. How can an institution reconcile a faculty member’s personal goals (the need for income, security, academic freedom, and autonomy) with the collective goals of the department and the institution?

Simply encouraging collaboration is not likely to work; what is needed is a deeper understanding of faculty preferences. If the unit of assessment is to shift from individual to collective achievement, a different model of motivation is required. One alternative might be the concept of "organizational motivation," which suggests that faculty members will contribute to the group — even at the expense of self-interest — if they (a) identify with their institution, and (b) think their behavior will affect the institution in a positive way. (For a fuller explanation of this concept, see B.M. Staw’s "Motivation Research Versus the Art of Faculty Management" in J.L. Bess’s College and University Organization: Insights From the Behavioral Sciences [New York University Press, 1984].) Thus, rather than focusing all of their attention on "reward systems," university administrators might be well advised to nurture faculty members’ affiliation with the institution, through socialization experiences, ceremonies, and other symbolic acts; by acknowledging faculty whose work benefits the institution; and by removing existing disincentives to participation in institutional citizenship. Administrators might also consider encouraging more discourse about what the institution does and should do for faculty.

A Final Note

The way an institution sees itself is reflected in how it evaluates. Among the institutions we studied, those that were most successful with departmental and unit assessment took the long view: administrators took the time to develop a commitment to and an energy for change, and then looked to assessment to help move the change along. They charged each department with the responsibility of identifying and answering its own evaluation questions, and held each accountable for doing that effectively. They enhanced organizational motivation by sharing information and encouraging dialogue. Finally, they took seriously issues of data quality and credibility. Administrators elsewhere interested in creating a more self-regarding institution would do well to follow their example.


Jon F. Wergin is a professor of educational studies at Virginia Commonwealth University and an AAHE Senior Scholar. He can be contacted at jwergin@saturn.vcu.edu.

He is the author of The Collaborative Department: How Five Campuses Are Inching Toward Cultures of Collective Responsibility, a publication of the Forum on Faculty Roles & Rewards (American Association for Higher Education, 1994).


