Overview

The Public Outreach Project is the Center’s effort to generate pressure – from beyond anthropology – for disciplinary change. It strives to implement an insight from Why a Public Anthropology?:

Trends that emphasize public engagement have an episodic history within cultural anthropology. They arise repeatedly, indicating an undercurrent of support for them. But they repeatedly fail to transform the field. If we are to move beyond this cyclic process that repeatedly appears promising but rarely brings the needed change, we must face up to an unpleasant fact. Anthropologists are not that motivated to transform the system. . . . Whatever the needs of the larger society, whatever the dysfunctions of the present system, it works well enough for those within cultural anthropology most of the time. It may be broken from the perspective of the larger society. But it is not that broken from the perspective of those within the field.
 
That is why it is critical to move the impetus for change outside the field, to involve those beyond the discipline. Without outside pressure, the field will likely remain stuck within its present academic orientations.

In 2006, the Public Outreach Project offered a preliminary ranking of public outreach efforts in American anthropology doctoral departments. It will present a more refined ranking in 2011. Whatever objections universities and departments raise to various ranking systems, it is clear that rankings grab public attention. They push universities to lean in directions likely to improve their rankings.

The “gold standard” for academic rankings is the set conducted by the National Research Council (NRC). The 1993/95 NRC rankings – which served as the model for the Center’s 2006 ranking – were based primarily on a department’s reputation as assessed by other scholars. Nonetheless, the 1993/95 rankings had certain problems. A department’s positive (or negative) reputation, for instance, might linger well after the department had changed. Larger departments were often ranked higher simply because, with their larger faculties, they tended to have more prominent scholars. The 1993/95 NRC rankings also overemphasized minor differences between departments: departments separated by as little as .01 points were given distinct ranks. On the positive side, however, each department had a precise national rank. Senior administrators could compare the national rankings of different departments at their universities and decide which departments should be favored financially.

The Center’s 2006 Outreach Project was also less than perfect. First, it involved an enormous amount of labor to set up and conduct. Second, the criteria for assessing public outreach were flawed. They included: (1) the number of programs associated with a department that focused on public issues; (2) the number and types of public outreach activities that individual faculty members within a department chose to describe; and (3) the degree to which individual faculty members within a department were cited in prominent print media. Having a department sponsor outreach programs is not the same as having its faculty and students actively engage in outreach activities. Nor can the self-reporting of faculty be relied on to assess their outreach activities. Third, while there were some positive ripples from the 2006 ranking, relatively few administrators appeared to take serious note of it.

The National Research Council’s 2010 rankings offered much food for thought as a model for the Center’s 2011 ranking. The NRC moved away from reputational rankings toward more “data-based assessments” in which scholars were asked to delineate the criteria they used for assessing reputation. The NRC then assessed these criteria directly – taking note of such factors as the number of faculty awards and publications, student completion rates, and the degree to which graduates obtained jobs. Departments had several ranks – one for each criterion. It was an impressive achievement. But the 2010 rankings faced a new problem. Senior administrators now lacked an easily perceived rank to assess one department against another. The rankings seemed to confuse many. What the new rankings gained in accuracy, they lost in clarity.

The 2011 Public Outreach Project builds on these earlier efforts. First, the Center will publicize the rankings to a wide public audience – one that includes state and federal legislators as well as the media. Second, it will focus on a single criterion that can readily be evaluated – the citation of a department’s faculty in the public media as assessed by the Google News Archive. This makes the ranking process transparent: anyone can collect the relevant data for evaluating an individual or a department and compare it with data on other departments. The approach contrasts with that used by U.S. News and World Report – in which a school’s ranking is derived from a variety of criteria combined with arbitrarily assigned weights that may or may not make sense. Third, focusing on a single criterion offers a clarity that harried administrators can perceive and, importantly, use in allocating funding and promoting change.
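
To make the transparency claim concrete, the sketch below shows how such a single-criterion ranking might be tabulated once the citation counts have been gathered. It is a minimal illustration in Python, not the Center’s actual procedure: the department names, faculty names, and counts are hypothetical placeholders, and the counts themselves are assumed to have already been collected from the Google News Archive.

```python
from collections import defaultdict

# (department, faculty member) -> number of public-media citations.
# All names and counts below are hypothetical placeholders; in practice
# the counts would come from searches of the Google News Archive.
mention_counts = {
    ("Department A", "Prof. Smith"): 12,
    ("Department A", "Prof. Jones"): 3,
    ("Department B", "Prof. Lee"): 9,
}

def rank_departments(counts):
    """Sum each department's faculty mentions and rank departments high to low."""
    totals = defaultdict(int)
    for (dept, _faculty), n in counts.items():
        totals[dept] += n
    # Sort by total mentions, descending, and assign ranks 1, 2, 3, ...
    ordered = sorted(totals.items(), key=lambda item: item[1], reverse=True)
    return [(rank, dept, total)
            for rank, (dept, total) in enumerate(ordered, start=1)]

for rank, dept, total in rank_departments(mention_counts):
    print(f"{rank}. {dept}: {total} media citations")
```

Because the ranking rests on a single, readily verifiable number per department, anyone who gathers the same counts can reproduce the ordering exactly – which is the sense in which the process is transparent.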

The degree to which faculty are cited in the public media provides a reasonable sense of which faculty are (and are not) garnering public attention. University administrators frequently highlight how their universities serve the public good – not only to cast a positive public image but, equally important, to ensure outside support. The criterion parallels the journal citation counts frequently used in academia, but without the tendency toward citation inflation. It fits with what many perceive anthropologists and academics should be doing in their research: promoting the public good in public ways.