Thursday, 18 December 2014

Dividing up the pie in relation to REF2014

OK, I've only had an hour to look at REF results, so this will be brief, but I'm far less interested in league tables than in the question of how the REF results will translate into funding for different departments in my subject area, psychology.

I should start by thanking HEFCE, who are a model of efficiency and transparency: I was able to download a complete table of REF outcomes from their website.

What I did was to create a table with just the Overall results for Unit of Assessment 4, which is Psychology, Psychiatry and Neuroscience (i.e. a bigger and more diverse grouping than for the previous RAE). These Overall results combine information from Outputs (65%), Impact (20%) and Environment (15%). I excluded institutions in Scotland, Wales and Northern Ireland.

Most of the commentary on the REF focuses on the so-called 'quality' rankings. These represent the average rating for an institution on a 4-point scale. Funding, however, will depend on the 'power' - i.e. the quality rankings multiplied by the number of 'full-time equivalent' staff entered in the REF. Not surprisingly, bigger departments get more money. The key things we don't yet know are (a) how much funding there will be, and (b) what formula will be used to translate the star ratings into funding.

With regard to (b), in the previous exercise, the RAE, you got one point for 2*, three points for 3* and seven points for 4*. It is anticipated that this time there will be no credit for 2* and little or no credit for 3*. I've simply computed the sums according to two scenarios: the original RAE weights, and a formula in which only 4* counts. From these scores one can readily compute what percentage of available funding will go to each institution. The figures are below, after a sketch of the calculation. Readers may find it of interest to look at this table in relation to my earlier blogpost on The Matthew Effect and REF2014.
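For anyone who wants to reproduce the sums, here is a minimal sketch in Python. The institutions, FTE counts and quality profiles in it are invented purely for illustration; the real inputs are the Overall profiles and FTE figures in the HEFCE table.

    # Weight per star level under the two scenarios
    RAE_WEIGHTS = {"4*": 7, "3*": 3, "2*": 1}   # previous RAE formula
    FOURSTAR_ONLY = {"4*": 1}                   # only 4* earns credit

    submissions = [
        # (institution, FTE entered, % of the Overall profile at each star level)
        ("Institution A", 80.0, {"4*": 40, "3*": 45, "2*": 10}),
        ("Institution B", 25.0, {"4*": 30, "3*": 50, "2*": 15}),
    ]

    def power(fte, profile, weights):
        # Quality-weighted volume: FTE multiplied by the weighted star profile
        return fte * sum(weights.get(star, 0) * pct / 100
                         for star, pct in profile.items())

    def funding_shares(subs, weights):
        # Percentage of the available pot going to each institution
        scores = {name: power(fte, prof, weights) for name, fte, prof in subs}
        total = sum(scores.values())
        return {name: round(100 * s / total, 1) for name, s in scores.items()}

    print(funding_shares(submissions, RAE_WEIGHTS))
    print(funding_shares(submissions, FOURSTAR_ONLY))

Note that the percentage shares depend only on the ratios of the weights, not their absolute size, so the size of the pot (question (a) above) changes the amounts but not the table below.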

Unit of Assessment 4: % of subject funding for each institution under the two funding formulas

Institution   RAE weights   4* only
University College London 16.1 18.9
King's College London 13.3 14.5
University of Oxford 6.6 8.5
University of Cambridge 4.7 5.7
University of Bristol 3.6 3.8
University of Manchester 3.5 3.7
Newcastle University 3.0 3.4
University of Nottingham 2.7 2.6
Imperial College London 2.6 2.9
University of Birmingham 2.4 2.7
University of Sussex 2.3 2.4
University of Leeds 2.0 1.5
University of Reading 1.8 1.6
Birkbeck College 1.8 2.2
University of Sheffield 1.7 1.7
University of Southampton 1.7 1.8
University of Exeter 1.6 1.6
University of Liverpool 1.6 1.6
University of York 1.5 1.6
University of Leicester 1.5 1.0
Goldsmiths' College 1.4 1.0
Royal Holloway 1.4 1.5
University of Kent 1.4 1.0
University of Plymouth 1.3 0.8
University of Essex 1.1 1.1
University of Durham 1.1 0.9
University of Warwick 1.1 1.0
Lancaster University 1.0 0.8
City University London 0.9 0.5
Nottingham Trent University 0.9 0.7
Brunel University London 0.8 0.6
University of Hull 0.8 0.4
University of Surrey 0.8 0.5
University of Portsmouth 0.7 0.5
University of Northumbria 0.7 0.5
University of East Anglia 0.6 0.5
University of East London 0.6 0.5
University of Central Lancs 0.5 0.3
Roehampton University 0.5 0.3
Coventry University 0.5 0.3
Oxford Brookes University 0.4 0.2
Keele University 0.4 0.2
University of Westminster 0.4 0.1
Bournemouth University 0.4 0.1
Middlesex University 0.4 0.1
Anglia Ruskin University 0.4 0.1
Edge Hill University 0.3 0.2
University of Derby 0.3 0.2
University of Hertfordshire 0.3 0.1
Staffordshire University 0.3 0.2
University of Lincoln 0.3 0.2
University of Chester 0.3 0.2
Liverpool John Moores 0.3 0.1
University of Greenwich 0.3 0.1
Leeds Beckett University 0.2 0.0
Kingston University 0.2 0.1
London South Bank 0.2 0.1
University of Worcester 0.2 0.0
Liverpool Hope University 0.2 0.0
York St John University 0.1 0.1
University of Winchester 0.1 0.0
University of Chichester 0.1 0.0
University of Bolton 0.1 0.0
University of Northampton 0.0 0.0
Newman University 0.0 0.0


P.S. 11.20 a.m. For those who have excitedly tweeted from UCL and KCL about how they are top of the league, please note that, as I have argued previously, the principal determinant of the % projected funding is the number of FTE staff entered. In this case the correlation is .995.
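For anyone checking the sums, that figure is a plain Pearson correlation between the FTE column and the projected % shares. A minimal sketch, with placeholder numbers rather than the real ones:

    def pearson_r(xs, ys):
        # Pearson correlation coefficient, computed from first principles
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    fte = [90.0, 75.0, 38.0, 30.0, 24.0, 12.0, 6.0]    # FTE entered (placeholders)
    share = [16.0, 14.0, 7.0, 5.5, 4.0, 1.8, 0.7]      # projected % share (placeholders)

    print(round(pearson_r(fte, share), 3))  # with the real data this comes out at .995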

Monday, 8 December 2014

Why evaluating scientists by grant income is stupid

[Cartoon: ©CartoonStock.com]

As Fergus Millar noted in a letter to the Times last year, “in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding”.
This topsy-turvy logic has become evident in some universities, with blatant demands for staff in science subjects to match a specified quota of grant income or face redundancy. David Colquhoun's blog is a gold-mine of information about the universities that have adopted such policies. He notes that if you are a senior figure based at the Institute of Psychiatry in London or the medical school at Imperial College London, you are expected to bring in an average of at least £200K of grant income per annum. Warwick Medical School has a rather less ambitious threshold of £90K per annum for principal investigators and £150K per annum for co-investigators [1].
So what's wrong with that? It might be argued that in times of financial stringency, universities may need to cut staff to meet their costs, and this criterion is at least objective. The problem is that it is stupid. It damages the wellbeing of staff, the reputation of the university, and the advancement of science.
Effect on staff 
The argument about the wellbeing of staff is a no-brainer, and one might have expected that those in medical schools would be particularly sensitive to the impact of job insecurity on the mental and physical health of those they employ. Sadly, those who run these institutions seem blithely unconcerned and instead impress upon researchers that their skills are valued only if they translate into money. This kind of stress affects not only those destined to be handed their P45 but also those around them. Even if you're not worried about your own job, it is hard to be cheerfully productive when surrounded by colleagues in states of high distress. I've argued previously that universities should be evaluated on staff satisfaction as well as student satisfaction: this is not just about the ethics of proper treatment of one's fellow human beings; it is also common sense that if you want highly skilled people to do a good job, you need to make them feel valued and provide them with a secure working environment.
Effect on the University
The focus on research income seems driven by two considerations: a desire to bring in money, and a desire to achieve status by being seen to bring in money. But how logical is this? Many people seem to perceive a large grant as some kind of 'prize', a perception reinforced by the tendency of the Times Higher Education and others to refer to 'grant-winners'. Yet funders do not give large grants as gestures of approval: the money is not some kind of windfall. With the rare exception of infrastructure grants, the money is given to cover the cost of doing research. Even now that we have Full Economic Costing (FEC) attached to research council grants, it covers no more than 80% of the cost to universities of hosting the research. Undoubtedly, the money accrued through FEC gives institutions leeway to develop infrastructure and other beneficial resources, but it is not a freebie, and big grants cost money to implement.
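To put a toy number on that last point (the 80% recovery rate is the only figure taken from the text; the grant size is invented):

    FEC_RECOVERY = 0.80  # research councils pay at most 80% of Full Economic Cost

    def university_shortfall(full_economic_cost):
        # The gap the host institution must fill from its own funds
        return (1 - FEC_RECOVERY) * full_economic_cost

    # A grant costed at 1,000,000 GBP brings in at most 800,000 GBP,
    # leaving the university to find 200,000 GBP itself.
    print(university_shortfall(1_000_000))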
So we come to the effect of research funding on a university's reputation. I assume this is a major driver behind the policies of places like Warwick, given that research income is one component of the league tables that are so popular in today's competitive culture. But, as some institutions learn to their cost, a high ranking in such tables may count for naught if a reputation for cavalier treatment of staff makes it difficult to recruit and retain the best people.
Effect on science
The last point concerns the corrosive effect on science if the incentive structure encourages people to apply for numerous large grants. It sidelines people who want to do careful, thoughtful research in favour of those who take on more than they can cope with. There is already a great deal of waste in science, with many researchers having a backlog of unpublished work that they don't have time to write up because they are busy writing the next grant. Four years ago I argued that we should focus on what people do with research funding rather than how much they have. On this basis, someone who achieved a great deal with modest funding would be valued more highly than someone who failed to publish many of the results from a large grant. I cannot express it better than John Ioannidis, who in a recent paper put forward a number of suggestions for improving the reproducibility of research. This was his suggested modification to our system of research incentives:
“….obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities.”
 
[1] If his web entry is to be believed, Warwick's Dean of Medicine, Professor Peter Winstanley, falls a long way short of this threshold, having brought in only £75K of grant income over a period of 7 years: roughly £11K per annum, against the £90K per annum expected of principal investigators. He won't be made redundant, though, as those with administrative responsibilities are protected.

Ioannidis, J. P. A. (2014). How to make more published research true. PLoS Medicine, 11(10), e1001747. doi: 10.1371/journal.pmed.1001747