Thursday, 18 December 2014

Dividing up the pie in relation to REF2014

OK, I've only had an hour to look at REF results, so this will be brief, but I'm far less interested in league tables than in the question of how the REF results will translate into funding for different departments in my subject area, psychology.

I should start by thanking HEFCE, who are a model of efficiency and transparency: I was able to download a complete table of REF outcomes from their website here.

What I did was to create a table with just the Overall results for Unit of Assessment 4, which is Psychology, Psychiatry and Neuroscience (i.e. a bigger and more diverse grouping than for the previous RAE). These Overall results combine information from Outputs (65%), Impact (20%) and Environment (15%). I excluded institutions in Scotland, Wales and Northern Ireland.

Most of the commentary on the REF focuses on the so-called 'quality' rankings. These represent the average rating for an institution on a 4-point scale. Funding, however, will depend on the 'power' – i.e. the quality rankings multiplied by the number of 'full-time equivalent' staff entered in the REF. Not surprisingly, bigger departments get more money. The key things we don't yet know are (a) how much funding there will be, and (b) what formula will be used to translate the star ratings into funding.

With regard to (b), in the previous exercise, the RAE, you got one point for 2*, three points for 3* and seven points for 4*. It is anticipated that this time there will be no credit for 2* and little or no credit for 3*. I've simply computed the sums according to two scenarios: the original RAE formula, and a formula where only 4* counts. From these scores one can readily compute what percentage of the available funding will go to each institution. The figures are below. Readers may find it of interest to look at this table in relation to my earlier blogpost on The Matthew Effect and REF2014.
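For those who want to see the sums, here is a minimal sketch of the computation in Python. The quality profiles and FTE figures below are invented for illustration – the real numbers come from the HEFCE table – and the weights are the ones described above.

```python
# Two funding scenarios: the old RAE weights (2* = 1, 3* = 3, 4* = 7)
# versus credit for 4* work only.
WEIGHTS = {
    "RAE": {4: 7, 3: 3, 2: 1},
    "4* only": {4: 1},
}

# (institution, FTE entered, % at 4*, % at 3*, % at 2*) - invented figures
departments = [
    ("Institution A", 80.0, 45, 40, 12),
    ("Institution B", 25.0, 30, 50, 18),
    ("Institution C", 10.0, 15, 45, 35),
]

def power(fte, p4, p3, p2, weights):
    """Quality-weighted score ('power'): star profile multiplied by FTE."""
    quality = (weights.get(4, 0) * p4
               + weights.get(3, 0) * p3
               + weights.get(2, 0) * p2)
    return fte * quality

for scenario, weights in WEIGHTS.items():
    scores = [power(fte, p4, p3, p2, weights)
              for _, fte, p4, p3, p2 in departments]
    total = sum(scores)
    print(scenario)
    for (name, *_), score in zip(departments, scores):
        print(f"  {name}: {100 * score / total:.1f}% of available funding")
```

Note how the biggest department dominates under either formula: the FTE multiplier does most of the work.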

Unit of Assessment 4:
Table showing % of subject funding for each institution depending on funding formula

Institution                          RAE formula    4*-only formula
University College London 16.1 18.9
King's College London 13.3 14.5
University of Oxford 6.6 8.5
University of Cambridge 4.7 5.7
University of Bristol 3.6 3.8
University of Manchester 3.5 3.7
Newcastle University 3.0 3.4
University of Nottingham 2.7 2.6
Imperial College London 2.6 2.9
University of Birmingham 2.4 2.7
University of Sussex 2.3 2.4
University of Leeds 2.0 1.5
University of Reading 1.8 1.6
Birkbeck College 1.8 2.2
University of Sheffield 1.7 1.7
University of Southampton 1.7 1.8
University of Exeter 1.6 1.6
University of Liverpool 1.6 1.6
University of York 1.5 1.6
University of Leicester 1.5 1.0
Goldsmiths' College 1.4 1.0
Royal Holloway 1.4 1.5
University of Kent 1.4 1.0
University of Plymouth 1.3 0.8
University of Essex 1.1 1.1
University of Durham 1.1 0.9
University of Warwick 1.1 1.0
Lancaster University 1.0 0.8
City University London 0.9 0.5
Nottingham Trent University 0.9 0.7
Brunel University London 0.8 0.6
University of Hull 0.8 0.4
University of Surrey 0.8 0.5
University of Portsmouth 0.7 0.5
University of Northumbria 0.7 0.5
University of East Anglia 0.6 0.5
University of East London 0.6 0.5
University of Central Lancs 0.5 0.3
Roehampton University 0.5 0.3
Coventry University 0.5 0.3
Oxford Brookes University 0.4 0.2
Keele University 0.4 0.2
University of Westminster 0.4 0.1
Bournemouth University 0.4 0.1
Middlesex University 0.4 0.1
Anglia Ruskin University 0.4 0.1
Edge Hill University 0.3 0.2
University of Derby 0.3 0.2
University of Hertfordshire 0.3 0.1
Staffordshire University 0.3 0.2
University of Lincoln 0.3 0.2
University of Chester 0.3 0.2
Liverpool John Moores 0.3 0.1
University of Greenwich 0.3 0.1
Leeds Beckett University 0.2 0.0
Kingston University 0.2 0.1
London South Bank 0.2 0.1
University of Worcester 0.2 0.0
Liverpool Hope University 0.2 0.0
York St John University 0.1 0.1
University of Winchester 0.1 0.0
University of Chichester 0.1 0.0
University of Bolton 0.1 0.0
University of Northampton 0.0 0.0
Newman University 0.0 0.0


P.S. 11.20 a.m. For those who have excitedly tweeted from UCL and KCL about how they are top of the league, please note that, as I have argued previously, the principal determinant of the projected % funding is the number of FTE staff entered. In this case the correlation is .995.
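For anyone who wants to check that sort of figure, a minimal sketch, again with invented numbers rather than the real FTE counts:

```python
# Pearson correlation between FTE staff entered and projected % funding.
# Both lists are invented for illustration.
from statistics import correlation  # available from Python 3.10

fte = [90, 75, 40, 30, 20, 12, 8]              # hypothetical FTE entered
share = [16.1, 13.3, 6.6, 4.7, 3.6, 2.0, 1.4]  # hypothetical % shares

print(f"r = {correlation(fte, share):.3f}")
```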

Monday, 8 December 2014

Why evaluating scientists by grant income is stupid

[Cartoon: ©CartoonStock.com]

As Fergus Millar noted in a letter to the Times last year, “in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding”.
This topsy-turvy logic has become evident in some universities, with blatant demands for staff in science subjects to match a specified quota of grant income or face redundancy. David Colquhoun’s blog is a gold-mine of information about the universities that have adopted such policies. He notes that if you are a senior figure based at the Institute of Psychiatry in London or the medical school at Imperial College London, you are expected to bring in an average of at least £200K of grant income per annum. Warwick Medical School has a rather less ambitious threshold of £90K per annum for principal investigators and £150K per annum for co-investigators [1].
So what’s wrong with that? It might be argued that in times of financial stringency, Universities may need to cut staff to meet their costs, and this criterion is at least objective. The problem is that it is stupid. It damages the wellbeing of staff, the reputation of the University, and the advancement of science.
Effect on staff 
The argument about the wellbeing of staff is a no-brainer, and one might have expected that those in medical schools would be particularly sensitive to the impact of job insecurity on the mental and physical health of those they employ. Sadly, those who run these institutions seem blithely unconcerned about this and instead impress upon researchers that their skills are valued only if they translate into money. This kind of stress impacts not only on those who are destined to be handed their P45 but also on those around them. Even if you’re not worried about your own job, it is hard to be cheerfully productive when surrounded by colleagues in states of high distress. I’ve argued previously that universities should be evaluated on staff satisfaction as well as student satisfaction: this is not just about the ethics of proper treatment of one’s fellow human beings, it is also common sense that if you want highly skilled people to do a good job, you need to make them feel valued and provide them with a secure working environment.
Effect on the University
The focus on research income seems driven by two considerations: a desire to bring in money, and a desire to achieve status by being seen to bring in money. But how logical is this? Many people seem to perceive a large grant as some kind of ‘prize’, a perception reinforced by the tendency of the Times Higher Education and others to refer to ‘grant-winners’. Yet funders do not give large grants as gestures of approval: the money is not some kind of windfall. With the rare exception of infrastructure grants, the money is given to cover the cost of doing research. Even now that we have Full Economic Costing (FEC) attached to research council grants, this covers no more than 80% of the costs to universities of hosting the research. Undoubtedly, the money accrued through FEC gives institutions leeway to develop infrastructure and other beneficial resources, but it is not a freebie, and big grants cost money to implement.
So we come to the effect of research funding on a University’s reputation. I assume this is a major driver behind the policies of places like Warwick, given that research income is one component of the league tables that are so popular in today’s competitive culture. But, as some institutions learn to their cost, a high ranking in such tables may count for naught if a reputation for cavalier treatment of staff makes it difficult to recruit and retain the best people.
Effect on science
The last point concerns the corrosive effect on science if the incentive structure encourages people to apply for numerous large grants. It sidelines people who want to do careful, thoughtful research in favour of those who take on more than they can cope with. There is already a great deal of waste in science, with many researchers having a backlog of unpublished work which they don’t have time to write up because they are busy writing the next grant. Four years ago I argued that we should focus on what people do with research funding rather than how much they have. On this basis, someone who achieved a great deal with modest funding would be valued more highly than someone who failed to publish many of the results from a large grant. I cannot express it better than John Ioannidis, who in a recent paper put forward a number of suggestions for improving the reproducibility of research. This was his suggested modification to our system of research incentives:
“….obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities.”
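To make that concrete, here is a toy sketch of what an ‘output versus opportunities’ comparison might look like. The names and figures are invented, and a real version would need quality-weighted outputs rather than a crude publication count.

```python
# Toy illustration of Ioannidis's 'output versus opportunities' balance:
# rank researchers by what they deliver relative to the resources they
# were given, not by the resources themselves. All figures invented.
researchers = [
    # (name, grant income in £K, good-quality publications delivered)
    ("A", 1200, 6),
    ("B", 150, 5),
    ("C", 400, 1),
]

# Sort by output per unit of funding, highest first.
for name, income, outputs in sorted(
        researchers, key=lambda r: r[2] / r[1], reverse=True):
    print(f"{name}: {1000 * outputs / income:.1f} publications per £1M")
```

On this toy measure, the modestly funded but productive researcher B comes out on top – exactly the inversion the quote argues for.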
 
[1] If his web entry is to be believed, then Warwick’s Dean of Medicine, Professor Peter Winstanley, falls a long way short of this threshold, having brought in only £75K of grant income over a period of 7 years. He won’t be made redundant though, as those with administrative responsibilities are protected.

Ioannidis, J. P. A. (2014). How to make more published research true. PLoS Medicine, 11(10), e1001747. DOI: 10.1371/journal.pmed.1001747

Friday, 28 November 2014

Metricophobia among academics

Most academics loathe metrics. I’ve seldom attracted so much criticism as for my suggestion that a citation-based metric might be used to allocate funding to university departments. This suggestion was recycled this week in the Times Higher Education, after a group of researchers published predictions of REF2014 results based on departmental H-indices for four subjects.
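For readers who haven’t met it, the H-index is simple to compute: a set of papers has index h if h of them have been cited at least h times each. A minimal sketch in Python, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Seven papers with these (invented) citation counts give h = 4:
print(h_index([25, 8, 5, 4, 3, 3, 0]))  # -> 4
```

A departmental version simply applies the same calculation to the pooled publications of a department’s staff.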

Twitter was appalled. Philip Moriarty, in a much-retweeted plea said: “Ugh. *Please* stop giving credence to simplistic metrics like the h-index. V. damaging”. David Colquhoun, with whom I agree on many things, responded like an exorcist confronted with the spawn of the devil, arguing that any use of metrics would just encourage universities to pressurise staff to increase their H-indices.

Now, as I’ve explained before, I don’t particularly like metrics. In fact, my latest proposal is to drop both REF and metrics and simply award funding on the basis of the number of research-active people in a department. But I’ve become intrigued by the loathing of metrics that is revealed whenever a metrics-based system is suggested, particularly since some of the arguments put forward seem rather illogical.

Odd idea #1 is that doing a study relating metrics to funding outcomes is ‘giving credence’ to metrics. It’s not. What would give credence would be if the prediction of REF outcomes from H-index turned out to be very good. We already know that whereas it seems to give reasonable predictions for sciences, it’s much less accurate for humanities. It will be interesting to see how things turn out for the REF, but it’s an empirical question.

Odd idea #2 is that use of metrics will lead to gaming. Of course it will! Gaming will be a problem for any method of allocating money. The answer to gaming, though, is to be aware of how it might be achieved and to block the obvious strategies, not to dismiss any system that could potentially be gamed. I suspect the H-index is less easy to game than many other metrics – though I’m aware of one remarkable case where a journal editor has garnered an impressive H-index from papers published in his own journals, with numerous citations to his own work. In general, though, those of us without editorial control are more likely to get a high H-index from publishing smaller amounts of high-quality science than from churning out pot-boilers.

Odd idea #3 is the assumption that the REF’s system of peer review is preferable to a metric. At the HEFCE metrics meeting I attended last month, almost everyone was in favour of complex, qualitative methods of assessing research. David Colquhoun argued passionately that to evaluate research you need to read the publications. To disagree with that would be like slamming motherhood and apple pie. But, as Derek Sayer has pointed out, it is inevitable that the ‘peer review’ component of the REF will be flawed, given that panel members are required to evaluate several hundred submissions in a matter of weeks. The workload is immense and cannot involve the careful consideration of the content of books or journal articles, many of which will be outside the reader’s area of expertise.

My argument is a pragmatic one: we are currently engaged in a complex evaluation exercise that is enormously expensive in time and money, that has distorted incentives in academia, and that cannot be regarded as a ‘gold standard’. So, as an empirical scientist, my view is that we should be looking hard at other options, to see whether we might be able to achieve similar results in a more cost-effective way.

Different methods can be compared in terms of the final result, and also in terms of unintended consequences. For instance, in its current manifestation, the REF encourages universities to take on research staff shortly before the deadline – as satirised by Laurie Taylor (see Appointments section of this article). In contrast, if departments were rewarded for a high H-index, there would be no incentive for such behaviour. Also, staff members who were not principal investigators but who made valuable contributions to research would be appreciated, rather than threatened with redundancy.  Use of an H-index would also avoid the invidious process of selecting staff for inclusion in the REF.

I suspect, anyhow, that we will find predictions from the H-index are less good for the REF than for the RAE. One difficulty for Mryglod et al. is that it is not clear whether the Units of Assessment on which they base their predictions will correspond to those used in the REF. Furthermore, in the REF, a substantial proportion of the overall score comes from impact, evaluated on the basis of case studies. To quote from the REF2014 website: “Case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe.” My impression is that impact was included precisely to capture an aspect of academic quality that is orthogonal to traditional citation-based metrics, and so this should weaken any correlation of outcomes with the H-index – a point illustrated by the small simulation below.
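Here is a toy simulation of that attenuation argument, using the 65/20/15 weighting described in the first post above. All the data are randomly generated, so this illustrates the logic rather than any real REF result.

```python
# If 'impact' is orthogonal to the H-index, folding it into the overall
# score should weaken the H-index/outcome correlation. Simulated data.
import random
from statistics import correlation  # available from Python 3.10

random.seed(1)
n = 100
h = [random.gauss(0, 1) for _ in range(n)]       # stand-in for H-index
outputs = [x + random.gauss(0, 0.5) for x in h]  # outputs track H-index
impact = [random.gauss(0, 1) for _ in range(n)]  # orthogonal to H-index
environ = [random.gauss(0, 1) for _ in range(n)]

overall = [0.65 * o + 0.20 * i + 0.15 * e
           for o, i, e in zip(outputs, impact, environ)]

print(f"r(H-index, outputs) = {correlation(h, outputs):.2f}")
print(f"r(H-index, overall) = {correlation(h, overall):.2f}")
```

The correlation with the overall score comes out somewhat lower than with outputs alone, which is the pattern I would expect to see in the REF data.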

Be this as it may, I’m intrigued by people’s reactions to the H-index suggestion, and wonder whether this relates to the subject one works in. For those in arts and humanities, it is particularly self-evident that we cannot capture all the nuances of departmental quality in an H-index – and indeed, it is already clear that correlations between the H-index and RAE outcomes are relatively low in these disciplines. These academics work in fields where complex, qualitative analysis is essential. Interestingly, RAE outcomes in arts and humanities (as in other subjects) are pretty well predicted by departmental size, and it could be argued that this would be the most effective way of allocating funds.

Those who work in the hard sciences, on the other hand, take precision of measurement very seriously. Physicists, chemists and biologists are often working with phenomena that can be measured precisely and unambiguously. Their dislike of the H-index might, therefore, stem from awareness of its inherent flaws: it varies with subject area and can be influenced by odd things, such as high citation counts arising from notoriety.

Psychologists, though, sit between these extremes. The phenomena we work with are complex. Many of us strive to treat them quantitatively, but we are used to dealing with measurements that are imperfect but ‘good enough’. To take an example from my own research: years ago I wanted to measure the severity of children’s language problems, and I was using an elicitation task, in which the child was shown pictures and asked to say what was happening. The test had a straightforward scoring system that gave indices of the maturity of the content and grammar of the responses. Various people, however, criticised this as too simple. I should take a spontaneous language sample, I was told, and do a full grammatical analysis. So, being young and impressionable, I did. I ended up spending hours transcribing tape-recordings from largely silent children, and hours more mapping their utterances onto a complex grammatical chart. The outcome: I got virtually the same result from the two processes – one of which took ten minutes and the other two days.

Psychologists evaluate their measures in terms of how reliable (repeatable) they are and how valid they are – that is, whether they measure what they are supposed to measure. My approach to the REF is the same as my approach to the rest of my work: try to work with measures that are detailed and complex enough to be valid for their intended purpose, but no more so. To work out whether a measure fits that bill, we need to do empirical studies comparing different approaches – not just rely on our gut reaction.

Friday, 24 October 2014

Blaming universities for our nation's woes


[Cartoon: ©CartoonStock.com]
Below is the text of a comment piece in the Times Higher Education by Jamie Martin, adviser to Michael Gove, on higher education in the UK, entitled “Must Do Better”. His paragraphs are interleaved with my thoughts on his arguments.


In an increasingly testing global race, Britain’s competitive advantage must be built on education.
What is this ‘increasingly testing global race’? Why should education be seen as part of an international competition rather than a benefit to all humankind?
Times Higher Education’s World University Rankings show that we have three of the world’s top 10 universities to augment our fast-improving schools. Sustaining a competitive edge, however, requires constant improvement and innovation. We must ask hard questions about our universities’ failures on academic rigour and widening participation, and recognise the need for reform.
Well, this seems a rather confused message. On the one hand, we are doing very well, but on the other hand we urgently need to reform.
Too many higher education courses are of poor quality. When in government, as special adviser to Michael Gove, I was shown an analysis indicating that around half of student loans will never be repaid. Paul Kirby, former head of the Number 10 Policy Unit, has argued that universities and government are engaging in sub-prime lending, encouraging students to borrow about £40,000 for a degree that will not return that investment. We lend money to all degree students on equal terms, but employers don’t perceive all university courses as equal. Taxpayers, the majority of whom have not been to university, pick up the tab when this cruel lie is exposed.
So let’s get this right. The government introduced a massive hike in tuition fees (£1,000 per annum in 1998, £3,000 p.a. in 2004, £9,000 p.a. in 2010). The idea was that people would pay for these with loans which they would pay off when they were earning above a threshold. It didn’t work because many people didn’t get high-paying jobs and now it is estimated that 45% of loans won’t be repaid.
Whose fault is this? The universities! You might think the inability of people to pay back loans is a consequence of lack of jobs due to recession, but, no, the students would all be employable if only they had been taught different things!  
With the number of firsts doubling in a decade, we need an honest debate about grade inflation and the culture of low lecture attendance and light workloads it supports. Even after the introduction of tuition fees, the Higher Education Policy Institute found that contact time averaged 14 hours a week and degrees that were “more like a part-time than a full-time job”. Unsurprisingly, many courses have tiny or even negative earnings premiums and around half of recent graduates are in non-graduate jobs five years after leaving.
An honest debate would be good. One that took into account the conclusions of this report by ONS which states: “Since the 2008/09 recession, unemployment rates have risen for all groups but the sharpest rise was experienced by non-graduates aged 21 to 30.”  This report does indeed note the 47% of recent graduates in non-graduate jobs, but points out two factors that could contribute to the trend: the increased number of graduates and decreased demand for graduate skills. There is no evidence that employers are preferring non-graduates to graduates for skilled jobs: rather there is a mismatch between the number of graduates and the number of skilled jobs.
This is partly because the system lacks diversity. Too many providers are weak imitations of the ancient universities. We have nothing to rival the brilliant polytechnics I saw in Finland, while the development of massive online open courses has been limited. The exciting New College of the Humanities, a private institution with world-class faculty, is not eligible for student loans. More universities should focus on a distinctive offer, such as cheaper shorter degrees or high-quality vocational courses.
What an intriguing wish-list: Finnish polytechnics, MOOCs, and the New College of the Humanities, which charges an eye-watering £17,640 for full-time undergraduates in 2014-15.  The latter might be seen as ‘exciting’ if you are interested in the privatisation of the higher education sector, but for those of us interested in educating the UK population, it seems more of an irrelevance – likely to become a finishing school for the children of oligarchs, rather than a serious contender for educating our populace.
If the failures on quality frustrate the mind, those on widening participation perturb the heart. Each year, the c.75,000 families on benefits send fewer students to Oxbridge than the c.100 families whose children attend Westminster School. Alan Milburn’s Social Mobility and Child Poverty Commission found that the most selective universities have actually become more socially exclusive over the past decade.
Flawed admissions processes reinforce this inequality. Evidence from the US shows that standardised test scores (the SAT), which are a strong predictor of university grades, have a relatively low correlation with socio-economic status. The high intelligence that makes you a great university student is not the sole preserve of the social elite. The AS modules favoured by university admissions officers have diluted A-level standards and are a poorer indicator of innate ability than standardised tests. Universities still prioritise performance in personal statements, Ucas forms and interviews, which correlate with helicopter parents, not with high IQ.
Criticise their record on widening access, and universities will blame the failures of the school system. Well, who walked on by while it was failing? Who failed to speak out enough about the grade inflation that especially hurt poorer pupils with no access to teachers who went beyond weakened exams? Until Mark Smith, vice-chancellor of Lancaster University, stepped forward, Gove’s decision to give universities control of A-level standards met with a muted response.
Ah, this is interesting. After a fulmination against social inequality in university admissions (well, at last a point I can agree on), Jamie Martin notes that there is an argument that blames this on failures in the school system. After all, if “The high intelligence that makes you a great university student is not the sole preserve of the social elite”, why aren’t intelligent children from working class backgrounds coming out of school with good A-levels? Why are parents abandoning the state school system? Martin seems to accept this is valid, but then goes on to argue that lower-SES students don’t get into university because everyone has good A-levels (grade inflation) – and that’s all the fault of universities for not ‘speaking out’. Is he really saying that if we had more discriminating A-levels, then the lower SES pupils would outperform private school pupils?
The first step in a prioritisation of education is to move universities into an enlarged Department for Education after the general election. The Secretary of State should immediately commission a genuinely independent review to determine which degrees are a sound investment or of strategic importance. Only these would be eligible for three-year student loans. Some shorter loans might encourage more efficient courses. Those who will brand this “philistinism” could not be more wrong: it is the traditional academic subjects that are valued by employers (philosophy at the University of Oxford is a better investment than many business courses). I am not arguing for fewer people to go to university. We need more students from poorer backgrounds taking the best degrees.
So, more reorganisation. And somehow, reducing the number of courses for which you can get a student loan is going to increase the number of students from poorer backgrounds who go to university. Just how this magic is to be achieved remains unstated.
Government should publish easy-to-use data showing Treasury forecasts on courses’ expected loan repayments, as well as quality factors such as dropout rates and contact time. It should be made much easier to start a new university or to remodel existing ones.
So here we come to the real agenda. Privatisation of higher education.
Politicians and the Privy Council should lose all control of higher education. Student choice should be the main determinant of which courses and institutions thrive.
Erm, but two paragraphs back we were told that student loans would only be available for those courses which were ‘a sound investment or of strategic importance’.
Universities should adopt standardised entrance tests. And just as private schools must demonstrate that they are worthy of their charitable status, universities whose students receive loans should have to show what action they are taking to improve state schools. The new King’s College London Maths School, and programmes such as the Access Project charity, are models to follow.
So it’s now the responsibility of universities, rather than the DfE to improve state schools?
The past decade has seen a renaissance in the state school system, because when tough questions were asked and political control reduced, brilliant teachers and heads stepped forward. It is now the turn of universities to make Britain the world’s leading education nation.
If there really has been a renaissance, the social gradient should fix itself, because parents will abandon expensive private education, and children will leave state schools with a raft of good qualifications, regardless of social background. If only….
With his ‘must do better’ arguments, Martin adopts a well-known strategy of those who wish to privatise public services: first starve them of funds, then heap on criticism to portray the sector as failing, so that it appears the only solution is for it to be taken over by the free market. The NHS has been the focus of such a campaign, and it seems that attention is now shifting to higher education. But here Martin has a bit of a problem. As indicated in his second sentence, we are actually doing surprisingly well, with our publicly funded universities competing favourably with the wealthy private universities of the USA.


PS. For my further thoughts on tuition fees in UK universities, see here.