Monday 25 February 2008

Shoot the REF?

A hotel in Dundee on a cold rainy night... the perfect opportunity to get the blog going again after too long a gap. I've been meaning to write for a while on a topic that has occupied a lot of my attention over the last 18 months, namely research assessment. With the submission of data for RAE 2008 at the end of last November, the UK academic community has finally consigned seven years of research activity to the hands of peer review panels. Even though the results of this latest exercise will not be known before December, the funding councils are already moving to establish the parameters for a replacement for the RAE - the proposed Research Excellence Framework (REF).

Universities were recently given the opportunity to respond to a consultation paper on the new REF. While most of the paper related to outline proposals to shift the balance of assessment in the so-called STEM (science, technology, engineering and medicine) subjects from peer review to 'metrics' (ie quantitative measures of performance, including citation counting), the paper also raised a number of questions about how the methodology for the arts, social sciences and humanities should be changed. Widespread concerns over the inappropriateness of bibliometrics for these disciplines appear, to a degree, to have been accepted, and the paper focuses instead on what kind of 'light touch' peer review would be appropriate for these subjects, in conjunction with a possibly greater range of metric indicators than are used at present.

There must be relatively few academics and policy-makers who would not consider that the RAE has had its day. It has, I think, had some beneficial effects, but it has also distorted certain aspects of research activity, and been a massively resource-intensive process. At Warwick alone, RAE 2008 produced a university submission comprising, so our RAE team tell us, 2,296 pages. The hours put in by university and departmental research coordinators and administrators, by internal and external peer reviewers, and in various committee meetings must be staggering, creating, I suspect, a huge opportunity cost for the whole exercise - and that's before we factor in the centralised costs to the funding councils of the assessment process itself.

So, what about the options for 2013?

The consultation has very much focused on metrics, and especially bibliometrics, as the primary methodology for the STEM disciplines. Even in this context, I think there are significant problems that need to be considered, and it is hard to resist the view that bibliometrics are a pretty bad idea, at least without some considerable refinement. Let me give you just a couple of quite obvious concerns. First, there is already some debate about what bibliometrics actually measure. HEPI has argued quite forcefully that metrics assess research impact, not research quality. If funding continues to be distributed on a quality basis, this must of itself raise the question of whether metrics are the appropriate primary measure. Secondly, any kind of research assessment will affect what it seeks to measure - a good methodology will maximise 'beneficial' effects (however we define them) and hopefully minimise undesirable and inefficient distortions. Bibliometrics inevitably threaten to bring in a whole new range of distortions: for example, citation counting could simply encourage departments to use co-authoring strategically to coat-tail less highly rated researchers on the work of research stars. Similarly, will bibliometrics actually reinforce the value of star researchers and the transfer market in such stars? They could work more against new and early career researchers than the existing, qualitative approach of the RAE does. Work takes time to have an impact, particularly with long publication lags in many journals. How will this be taken into account? This could be of considerable longer-term significance in the context of the demographic 'time bomb' most universities are facing, given ageing staff profiles.

The proposals for a 'light touch' peer review for the social sciences and humanities are only broadly sketched out at this stage. Even so, there are some grounds for concern, not least given the likely speed with which changes will be implemented. It is hard to see how the funding councils will reconcile the 'light touch' ideal with their stated commitment to continue with the process of quality profiling that was introduced for RAE 2008 (that is, where each publication is rated and the department is given a research profile showing the percentage of work at 4*, 3*, 2*, and so on). The light touch might also do more to embed or reinforce the status quo and concentrate research funding in a way that has negative consequences for the sector as a whole and for the student learning experience. The combination of detailed peer review with a range of both quantitative and qualitative inputs has facilitated the recognition and reward of smaller, emergent research cultures within institutions - essentially post-92 universities and colleges - that have not had the cultural capital or resources in the past to develop a breadth and institutional depth of research excellence. It would be unfortunate if this capacity were lost. Moreover, the impact of a new methodology seems very hard to assess in diversity terms at this stage. Initially at least, the new methodologies are also likely to create new, or at least different, demands on institutions, both in response to the proposed greater reliance on metrics and other quantitative measures and in the need to manage two different REF processes.

I wonder if it really is about time we all agreed enough is enough, but that's not going to happen, is it? That's the problem with the audit juggernaut: once you set it going, it's very hard to stop.