Evaluating Department Achievements: Consequences for the Work of Faculty
From the December 1999 AAHE Bulletin
Some colleges have learned how to change the program assessment process from a dreaded chore to a learning experience. Here are their strategies.
See if this scenario describes what happens in your own department: Every five to seven years your unit comes up for a formal program review. It's an event that does not inspire much enthusiasm among the faculty. There's plenty of busywork as the self-study is prepared, and there's a lot of grumbling about wasted time and bureaucratic intrusion. Everyone's goal is to get through the process with a minimum of aggravation. When it's over, the final report disappears into the bowels of the administration and nothing much changes; but at least the faculty won't have to worry about it again for a while. This whole pageant is repeated, with minor variations, for regional and specialized accreditation, student outcomes assessment, and various ad hoc strategic planning initiatives.
If all this seems distressingly familiar, you're not alone. Last year I was asked by the Pew Charitable Trusts, in conjunction with AAHE's New Pathways II project, to undertake a survey of quality assurance practices in academic departments, and I'll give away one of the key findings right now: There is plenty of assessment going on. Academic departments are evaluated in as many as five different ways: formal program review, student outcomes assessment, regional accreditation, specialized/professional accreditation, and performance-based budgeting. Some lucky departments are faced with all five, sometimes in the same year. But what is the cumulative impact of all this evaluation on the department and its faculty? How are all these data used internally? Are faculty work lives changing, in useful and constructive ways, or is evaluation instead making faculty work more onerous and bureaucratic? What kinds of evaluation policies and practices encourage constructive change in departments and a stronger culture of collective responsibility there? Some of the answers to these questions might surprise you.
First, however, a little background. My associate Judi Swingen and I began this project in August of 1998. We reviewed the literature, called upon personal networks of informants, and sent a mass mailing to all campus provosts. We eventually studied about 130 institutions across the Carnegie categories. From some we merely collected information on paper; for others we conducted telephone interviews with key campus participants; and for eight institutions we wrote extensive case studies based on two-day visits to the campuses.
We found widespread discontent with how institutions evaluate their academic departments. Among several roots of campus unhappiness, one was most striking: Most departments and most faculty failed to see the relevance of program evaluation and assessment to the work they did. The dominant mood on the campuses we studied was that program review and other forms of departmental assessment are largely ritualistic and time-consuming affairs, mandated from above, having few real consequences for the lives of the faculty.
Components of Effective Evaluation at the Departmental Level
Despite this rather bleak portrayal, we found some notable exceptions, places where unit evaluation informed judgments of worth and improved departmental functioning. These institutions do three things well: The organizational and cultural setting promotes a conducive atmosphere for evaluation; evaluation policies and practices are viewed as credible and fair; and evaluation criteria and standards are scrutinized carefully. I'll describe each of these characteristics in turn.
Organizational and cultural setting. Sometimes formal campus policies for assessment or program review seemed to make little difference; what did matter was effective leadership. Campuses with successful practices concerned themselves first with building an institutional climate supportive of quality improvement. For example, when the provost at a private research university with a history of successful program review was asked how he would go about initiating evaluation in another institution, he said this: "First I'd take a measure of the institution and its vision for the future. Is there ambition for change? I would try to find ways of articulating a higher degree of aspiration; if there wasn't a strong appetite for this, then program review would be doomed to failure."
The institutions we reviewed suggested several elements of a "quality" institutional climate.
Evaluation policies and practices. We found that, overall, the most effective policies and practices were both flexible and decentralized. Units were invited to define for themselves the critical evaluation questions, the key stakeholders and sources of evidence, and the most appropriate analysis and interpretation procedures. This suggests that institutions should focus less on accountability for achieving certain predetermined results and more on how well units conduct evaluations for themselves and use the data these evaluations generate. Rewards then accrue to units that can show how they have used the assessment to solve problems and resolve issues. This notion is similar to "academic audit" procedures currently in widespread use in Western Europe and Hong Kong: Rather than attempting to evaluate quality itself, the focus instead is on processes believed to produce quality.
Evaluation criteria and standards. Criteria are the kinds of evidence collected as markers of quality; standards are the benchmarks against which the evidence is compared. We found many problems with the use of evidence. It wasn't that institutions lacked information that might lead to judgments about departmental quality: our database contained examples of more than 100 quality indicators, and these were distributed fairly evenly across "input" (faculty qualifications, FTEs), "process" (curriculum quality, demands on students), and "output" (faculty publications, student learning) criteria. The problem rather was that in almost no case did institutional procedures call for examining the quality of the evidence itself, and this can lead to some bizarre consequences. Here is a particularly egregious example: At one flagship university the assessment of "quality of instruction" includes these criteria: number of full-time faculty (undergraduate and graduate), total students per full-time faculty (undergraduate, master's, and doctoral), and degrees awarded per faculty (baccalaureate, master's, and doctorate). How, one might ask, do these indices qualify as markers of instructional quality? What do they have to do with the quality of student learning?
Further, we found a widespread lack of clarity and agreement about what the standards should be. For example, what is the most appropriate standard for departmental research productivity: Departmental goals negotiated earlier with the dean? Last year's performance? The extent to which the scholarship fits within school priorities or the university's strategic plan? Or how well the department stacks up against its "peer" departments in other institutions? Standards considered important or credible by one stakeholder group may not be considered important at all by another; thus, departmental quality will always be in the eye of the beholder.
Recommendations for Good Practice
What does all this suggest about what institutions could do to promote more effective unit evaluation? Here are four possibilities.
1. Be proactive in discussions of "quality."
2. Decentralize evaluation to the maximum possible extent.
3. Recognize that evaluation is not for amateurs: Address the developmental needs of deans, chairs, and faculty.
4. Focus not just on enhancing collaboration and teamwork but on "organizational motivation."
Simply encouraging collaboration is not likely to work; a deeper understanding is needed of faculty preferences. If the unit of assessment is to shift from individual to collective achievement, a different model of motivation is needed. One alternative might be the concept of "organizational motivation," which suggests that faculty members will contribute to the group even at the expense of self-interest if they (a) identify with their institution, and (b) think their behavior will affect the institution in a positive way. (For a fuller explanation of this concept, see B.M. Staw's "Motivation Research Versus the Art of Faculty Management" in J.L. Bess's College and University Organization: Insights From the Behavioral Sciences [New York University Press, 1984].) Thus, rather than focusing all of their attention on "reward systems," university administrators might be well advised to nurture faculty members' affiliation with the institution, through socialization experiences, ceremonies, and other symbolic acts; by acknowledging faculty whose work benefits the institution; and by removing existing disincentives to participation in institutional citizenship. Administrators might also consider encouraging more discourse about what the institution does and should do for faculty.
A Final Note
The way an institution sees itself is reflected in how it evaluates. Among the institutions we studied, those that were most successful with departmental and unit assessment took the long view: administrators took the time to develop a commitment to and an energy for change, and then looked to assessment to help move the change along. They charged each department with the responsibility of identifying and answering its own evaluation questions, and held each accountable for doing that effectively. They enhanced organizational motivation by sharing information and encouraging dialogue. Finally, they took seriously issues of data quality and credibility. Administrators elsewhere interested in creating a more self-regarding institution would do well to follow their example.
Jon F. Wergin is a professor of educational studies at Virginia Commonwealth University and an AAHE Senior Scholar. He can be contacted at email@example.com.
He is the author of The Collaborative Department: How Five Campuses are Inching Toward Cultures of Collective Responsibility, a publication of the Forum on Faculty Roles & Rewards (American Association for Higher Education, 1994).
Copyright © 2008 - American Association for Higher Education and Accreditation