Assessing the Real College Experience
The architects of the National Survey of Student Engagement talk about the meaning behind the numbers

From the January 2001 AAHE Bulletin

An interview by Barbara Cambridge

The National Survey of Student Engagement is an innovative national project that supplies colleges and universities with data about student engagement, enabling them to confirm what they do well and identify what they might improve. It also enables them to compare themselves with groups of similar institutions and to make easily understood data available to their faculty, administrators, students, and multiple publics.

I interviewed three central figures in the development of NSSE to learn about its first findings and future plans.

  • Russ Edgerton, director of the Pew Forum on Undergraduate Learning, chairs the NSSE National Advisory Board and championed the original idea for a national survey of student engagement in learning.
  • Peter Ewell, senior associate at the National Center for Higher Education Management Systems, led the design team that developed the NSSE survey instrument. He is also a member of the NSSE National Advisory Board and chairs the NSSE Technical Advisory Panel.
  • George Kuh, a professor of higher education in the Center for Postsecondary Research and Planning at Indiana University Bloomington, was a member of the design team that developed the survey instrument. He directs the NSSE project.

Edgerton, Ewell, and Kuh discuss their ideas about the value of NSSE and its possible import for higher education.

Cambridge: NSSE offers a new way to gather important data about student engagement and learning. What are the key features of this national survey?

Kuh: NSSE focuses on the kinds of educational practices that institutions should be using to enhance student learning. Buttressing this key feature, just about every question on the instrument, and each of the benchmarks, has clear empirical grounding; the survey is derived from the best that we know from research about student learning. Other attributes include the random sampling, the targeting of first-year students and of seniors, and the possibility of using this as a longitudinal data collection instrument.

Ewell: I would say that the distinctive features could be sharpened into three areas. For one, the instrument, as George says, is very focused, very short, and very research-grounded. Second, a third-party administration system is unusual among these kinds of data collection efforts: It results in a more homogeneous response rate from students and enhances the credibility of the data. Third, the use of benchmarks is distinctive; I know we’ll talk about that later.

Cambridge: What need does the NSSE address? What was its origin?

Edgerton: At the Pew Charitable Trusts, I put together a meeting in early 1998 to think hard about the ratings and rankings of colleges and universities. Almost all of these systems measure resources and reputation rather than contributions to student learning, so we focused our discussion on what could be done to create an alternative. What new source of evidence might be developed? Interestingly, as we developed the mechanism, we discovered that the survey needed to have improvement as a first goal, with assessment and accountability as a longer-term goal.

Ewell: This invitation from Russ piqued our interest because at the National Center for Higher Education Management Systems we had been advocating the notion of using good practice measures of various kinds, including student responses and student testimony about the use of educational good practices, ever since proposals about national testing came from the federal government during the national education goals process. We said, "Boy, this is a really neat opportunity to develop a national data collection system that might head off some of the more ill-conceived kinds of accountability measures being considered in some quarters."

Cambridge: I’m interested in the way that it started with accountability, turned to being an instrument for improvement, and returned to accountability. Is one of the two more important, or will there always be a movement between the two?

Ewell: One of the things that does distinguish the NSSE enterprise from other forms of surveys about student experience is its clear public focus. It tries to shift the nature of public conversations about accountability and educational quality toward the right kinds of things, away from resources and reputation and toward actual good practice in undergraduate education. This was always a keystone of the project.

Cambridge: Looking at the NSSE questions, I imagine it’s benefiting at least four groups: institutions, accreditors, students, and states. Let’s turn first to institutions. Are there three or four findings that institutions so far have found useful? And even more important, are they using the results? I know it may be too early to say much about that last point, but actual uses are important to know about.

Kuh: Institutions tend to look for different things in the data. Although it’s early to talk about how institutions are responding to specific findings, I can provide a few points that are worth considering.

First, almost every institutional report that goes back to the colleges and universities confirms something about the institution’s mission. For example, the profile for denominational colleges reveals a heavy emphasis on values orientation and the distinctiveness of each school. This affirmation of mission differentiation seems important to campuses working to carve out their identities.

Second, institutions have expressed a keen interest in looking at how well they do in reference to their own peer or reference group. One very interesting possibility is that institutions will create consortia in which to compare themselves: groups smaller than, say, a Carnegie classification, and much more alike in terms of mission, student characteristics, and so forth.

Ewell: Barbara, I need to underline George’s point about consortia as being a way of "squaring the circle" with regard to accountability versus improvement. It is a way of moving beyond the institution toward collective responsibilities for good practice while at the same time not invading institutional privacy.

Kuh: The most common campus response to receiving the NSSE reports has been to generate internal reports meant to lead to action. For example, one high-performing institution did well on all but one of the benchmarks, an outcome that corroborated other institutional data and gave the institution the confidence to take action in that area.

A specific example of a response involves Southwest Texas State. Bob Smallwood, the institutional contact for NSSE, has been marching these data around his campus, to the president’s cabinet, the annual academic department chair retreat, the academic advisors. Bob says that the process has been tremendously yeasty, supplying lots of grist for the mill. Now he has gone to the Board of Regents of the Texas State Colleges. Because the University of Texas system used the NSSE last spring and the A&M system will do so in 2001, Smallwood hosted a workshop that drew people from two dozen Texas-based public and private colleges and universities. There is a keen interest in that state. Bob has done a lot to focus many people on these effective educational practices described in the survey.

  Benchmarks from the 2000 National Survey of Student Engagement
  • Level of academic challenge — time spent preparing for class, amount of reading and writing, and institutional expectations for academic performance
  • Active and collaborative learning — participating in class, working collaboratively with other students inside and outside of class, and tutoring
  • Student interactions with faculty members — talking with faculty members and advisors, discussing ideas from classes with faculty members outside of class, getting prompt feedback on academic performance, and working with a faculty member on a research project
  • Enriching educational experiences — talking with students with different racial or ethnic backgrounds or with different political opinions or values, using electronic technology, and participating in such activities as internships, community service, study abroad, co-curricular activities, or a culminating senior experience
  • Supportive campus environment — the extent to which students perceive the campus helps them succeed academically and socially, assists them in coping with non-academic responsibilities, and promotes supportive relations between students and their peers, faculty members, and administrative personnel and offices
 
Cambridge: The benchmarks for the survey seem to me crucial to the usefulness of survey results (see box, above). The five you selected are so important to undergraduate education. Is the expectation that institutions will work toward excelling in all five areas?

Kuh: Both strong and not-so-strong performance serve the accountability function, but for improvement, schools, I think, are going to focus on areas where they are below par relative to their peers. The good news is that many schools appear strong enough to be competitive with their peers on at least some of the items.

I don’t think it is possible for any institution, given the nature of its students and the wide variety of important educational activities that this instrument addresses, to be at the top in every category. On the other hand, the key is not whether an institution is very, very strong in one area but whether its students are engaged across the board at a meaningful level.

Ewell: Certainly institutions should be expected to do better in some areas than others. But they ought to be paying attention to all five areas. If they drop below expectations on the benchmarks, the bad news is as useful as the good news because they know what they need to work on.

Kuh: One way to proceed is to go to various groups on campus and ask, "At what level should our institution be performing in this area, given our students and the academic mission of this particular department, unit, or the overall institution?" Those are the kinds of discussions that we hope the data will prompt on respective campuses.

Ewell: That’s another really good plug for peer analysis and consortium work because each participant will know what levels of attainment are possible based on institutions like them.

Cambridge: Are you doing anything to help institutions think about how to work with a consortium, or are you leaving that to institutions to do?

Kuh: Well, it’s fundamentally an institutional question, but your query is exactly right. A hallmark of our effort has been to stay close to participating schools, keeping them informed about the process and asking them for feedback on the quality of our reports. We’ve had telephone debriefings, meetings at national and regional conferences, and focus groups. As new schools come on board, we’re helping by supplying answers to frequently asked questions and examples of how schools already using the survey have put their data to work.

Cambridge: Let’s move to another group that might benefit from NSSE results: regional accreditors. Would the categories on the NSSE be useful for determining, for example, educational effectiveness?

Ewell: Certainly the NSSE could provide a vehicle for accreditors to ask campuses for more focus on educational effectiveness and good practice. The survey could illuminate the kinds of questions that make the campus’s inquiry into its own practices more precise.

Kuh: Let me jump in here. If an institution thinks that it is distinctive in some respects and not in others, NSSE can confirm or disconfirm whether that is, in fact, the case. I see this, in terms of accreditation, as a wonderful tool for campuses to get clearer about both the outcomes that they care most about and the extent to which they have the processes in their environment to produce those outcomes.

Ewell: A group of campuses might even want to do this together as they come up for accreditation, forming a ready-made consortium where they could share results. I’ve talked about this with the Western Association of Schools and Colleges [one of the regional accrediting agencies], which appears quite interested.

Kuh: One of the more important internal uses of NSSE data is pointing out areas that ought to be researched or assessed in more depth. The NSSE survey by itself can’t yield the level of detail that the institution needs, but it will certainly identify very quickly areas that need attention.

Cambridge: Let’s move from institutions and accreditors to the students themselves. Why would students want to take this survey seriously?

Ewell: I’ll take a first crack at that. Students ought to be asking certain questions about their own education, a kind of self-monitoring. That’s a direct benefit; the indirect one is that the testimony of students will help the institution get better, and therefore benefit present and future students. Most of the time nobody really asks students what the heck is going on at an institution. Students are often flattered to be asked that question.

Kuh: In focus groups this last spring, students were very thoughtful and reflective when they talked about the survey. Another point is that in reading about the educationally enriching practices, like study abroad, interdisciplinary course work, and internships, some students start thinking about the opportunities that are or should be available to them.

Ewell: Yes, and another potential of these kinds of questions is to shape what parents and prospective students ought to be asking about any college that they visit.

Cambridge: In the wake of the recent state report cards, which stated that we don’t even know enough about student learning at colleges and universities to rate them, states may be looking for ways to determine and document student learning. If a state were to come to you with the question "What would the NSSE offer that something like statewide testing, some other national survey, or some other data collection system does not?", how would you answer?

Ewell: This is something that the states actually might get implemented, as opposed to standardized testing, which is often proposed but rarely happens. It is relatively inexpensive, has enough precedent that schools don’t run away screaming, and won’t send faculty to the barricades over intrusiveness. It’s a practical answer to accountability.

The best answer, however, is that the NSSE focuses on things that institutions can do something about. A good educational practice is something that any campus can engage in. The questions point to practices that are quite clear, that have policy handles for action, and that campus leadership can push and really do something with.

Edgerton: The key point is that outcomes testing doesn’t give you information that you can do anything about if you’re at the campus level. The NSSE helps you explain the performance, not just record the performance. Of course, you’d want a mix of strategies to get at a multifaceted picture of what is going on on the campus for accountability. But the fact that NSSE focuses on the processes that produce the outcomes means that a campus can really do something with the information it gets.

Cambridge: How does the consortium idea fit when we’re talking about the state level? Would NSSE prompt consortia of like kinds of institutions within a state? Across state lines? Is there any way to fit state needs and consortia together?

Kuh: With forethought. A state can act like a consortium because it can add as many as 20 additional questions to the regular set. Any consortium can do that. Consortia forming right now are developing additional questions specific to their members. At least six schools need to be in a consortium for it to work. During 2000 and 2001, about 500 different institutions will have used NSSE, so there is a rich set of four-year colleges to choose from.

Ewell: We’re really in the beginning stages of working with states on how they can best use this information, though. I think that is one of the things we really want to work on.

Cambridge: How can a college participate in the next survey?

Kuh: Although we can’t accept new schools in the Pew-subsidized survey administration, we can add institutions that can pay what we estimate is full cost. That is about $7 to $8 per student. Anyone interested can contact me.

Edgerton: We do need to say that the current administration is for four-year schools. The good news is that the Pew Charitable Trusts will soon underwrite a kind of daughter of NSSE in the community college sector. We care deeply about providing this same basis for understanding student engagement to two-year schools.

Cambridge: Any last words about why you have committed yourself to be a part of this project?

Ewell: Of the many agendas for improving undergraduate education that we’ve been involved in, this is among the most concrete. I feel very good about it in that regard.

Kuh: This work will be worthwhile as frontline faculty get their hands on the data and use it to continue conversations about teaching and learning. This plays hand in glove with the emphasis on the scholarship of teaching and learning, anchoring those conversations.

Edgerton: Sandy Astin [director of the Higher Education Research Institute, University of California, Los Angeles] said to me early on in the project, "Don’t think of this as a survey. Think of it as a way to give voice to an agenda and to deepen our understanding of how to improve undergraduate education." I’d put a further point on this: We need to educate the public, students, and parents about what questions they ought to be asking about higher education. NSSE asks some of those right questions.

Cambridge: Thanks for your answers and for providing leadership in this important effort.


Survey Background

The recently released results of the first National Survey of Student Engagement (NSSE) summarize the views of 63,000 first-year and senior students at 276 four-year colleges and universities about the extent to which they participate in classroom and campus activities that research studies show are important to learning.

Survey results will be used to construct benchmarks for determining how effectively colleges are contributing to learning. These benchmarks are the level of academic challenge, active and collaborative learning, student-faculty interactions, enriching educational experiences, and supportive campus environments. (See the box of benchmarks above for more detail.)

Cosponsored by the Pew Forum on Undergraduate Learning and the Carnegie Foundation for the Advancement of Teaching, the survey will be conducted annually by a professional survey research center.

For more on the National Survey of Student Engagement, including information on how to participate in future surveys, see the NSSE website at www.indiana.edu/~nsse.

Highlights of the National Survey of Student Engagement
  • Colleges vary widely in the extent to which their students are involved in effective educational practices.
  • Liberal arts colleges, as a group, score higher than all other types of colleges in every area of effective practice that is measured.
  • Similar kinds of colleges vary in their performance. For example, while many small colleges provide very engaging environments, many other small colleges do not.
  • Colleges tend to be strong in particular areas rather than across the board. Of all the colleges and universities participating in the survey, only four scored in the top 20 percent on all five benchmarks.

Promising and Disappointing Findings of the National Survey of Student Engagement
  • Most students, 79 percent, say that their institution expects them to study a significant amount, but few actually meet this expectation. Fewer than 15 percent come close to following the long-established convention of studying two hours outside class for every hour in class, which for a typical 15-hour course load would mean about 30 hours of study a week. More than half (55 percent) spend only one hour or less for every class hour.
  • A commendable 63 percent of seniors participated in community service or did volunteer work. More than two-fifths, 41 percent, were involved in a community-based project as part of a regular course.
  • Unfortunately, 19 percent of first-year students "never" made a class presentation and 46 percent "never" discussed readings or ideas with a faculty member outside class.
  • A notable number, more than 45 percent, of first-year and senior students reported they "often" or "very often" had serious conversations with students of a different racial or ethnic group.



