
Thursday, 12 October 2023

When privacy rules protect fraudsters

 

 
I was recently contacted with what I thought was a simple request: could I check the Oxford University Gazette to confirm that a person, X, had undergone an oral examination (viva) for a doctorate a few years ago. The request came indirectly from a third party, Y, via a colleague who knew that on the one hand I was interested in scientific fraud, and on the other hand, that I was based at Oxford.

My first thought was that this was a rather cumbersome way of checking someone's credentials. For a start, as Y had discovered, you can consult the online University Gazette only if you have an official affiliation with the university. In theory, when someone has a viva, the internal examiner notifies the University Gazette, which announces details in advance so that members of the university can attend if they so wish. In practice, it is vanishingly rare for an audience to turn up, and the formal notification to the Gazette may get overlooked.

But why, I wondered, didn't Y just check the official records of Oxford University listing names and dates of degrees? Well, to my surprise, it turned out that you can't do that. The university website is clear that to verify someone's qualifications you need to meet two conditions. First, the request can only be made by "employers, prospective employers, other educational institutions, funding bodies or recognised voluntary organisations". Second, "the student's permission ... should be acquired prior to making any verification request".

Anyhow, I found evidence online that X had been a graduate student at the university, but when I checked the Gazette I could find no mention of X having had an oral examination. The other source of evidence would be the University Library where there should be a copy of the thesis for all higher degrees. I couldn't find it in the catalogue. I suggested that Y might check further but they were already ahead of me, and had confirmed with the librarian that no thesis had been deposited in that name.

Now, I have no idea whether X is fraudulently claiming to have an Oxford doctorate, but I'm concerned that it is so hard for a private individual to validate someone's credentials. As far as I can tell, the justification comes from data protection regulations, which control what information organisations can hold about individuals. This is not an Oxford-specific interpretation of rules - I checked a few other UK universities, and the same processes apply.

Having said that, Y pointed out to me that there is a precedent for Oxford University to provide information when there is media interest in a high-profile case: in response to a freedom of information request, the university confirmed that Ferdinand Marcos Jr did not have the degree he was claiming.

There will always be tension between openness and the individual's right to privacy, but the way the rules are interpreted means that anyone could claim to have a degree from a UK university and it would be impossible to check. Is there a solution? I'm no lawyer, but I would have thought it trivial to require that, on receipt of a degree, the student gives signed permission for their name, degree and date of degree to be recorded on a publicly searchable database. I can't see a downside to this, and going forward it would save a lot of administrative time spent dealing with verification requests.

Something like this does seem to work outside Europe. I only did a couple of spot checks, but found this for York University (Ontario):

"It is the University's policy to make information about the degrees or credentials conferred by the University and the dates of conferral routinely available. In order to protect our alumni information as much as possible, YU Verify will give users a result only if the search criteria entered matches a unique record. The service will not display a list of names which may match criteria and allow you to select."

And for Macquarie University, Australia, there is exactly the kind of searchable website that I'd assumed Oxford would have.

I'd be interested if anyone can think of unintended bad consequences of this approach. I had a bit of to-and-fro on Twitter about this with someone who argued that it was best to keep as much information as possible out of the public domain. I remain unconvinced: academic qualifications are important for providing someone with credentials as an expert, and if we make it easy for anyone to pretend to have a degree from a prestigious institution, I think the potential for harm is far greater than any harms caused by lack of privacy. Or have I missed something? 

N.B. Comments on the blog are moderated, so they may only appear after a delay.


P.S. Some thoughts via Mastodon from Martin Vueilleme on potential drawbacks of a directory:

Far-fetched, but I could see the following reasons:

- You live in an oppressive country that targets academics, intellectuals
- Hiding your university helps prevent stalkers (or other predators) from getting further information on you
- Hiding your university background to fit in a group
- Your thesis is on a sensitive topic or a topic forbidden from being studied where you live
- Hiding your university degree because you were technically not allowed to get it (e.g. women)

My (DB) response: when these risks are weighed against the risk of fraudsters benefiting from the lack of checking, I think the case for an open directory is strengthened, since they seem very slight for UK universities (at least for now!). There is also a financial cost/benefit analysis, and here too an open directory would seem superior: the degree records have to be maintained anyhow, whereas the current system incurs extra staff costs for responding to verification requests.

Thursday, 22 December 2016

Controversial statues: remove or revise?



The Rhodes Must Fall campaign in Oxford ignited an impassioned debate about the presence of monuments to historical figures in our universities. On the one hand, there are those who find it offensive that a major university should continue to commemorate a person such as Cecil Rhodes, given the historical reappraisal of his role in colonialism and the suppression of African people. On the other hand, there are those who worry that removal of the Rhodes statue could be the thin end of a wedge, leading to demands for Nelson to be removed from Trafalgar Square or Henry VIII from King’s College Cambridge. There are competing online petitions to remove and to retain the Rhodes statue, with both having similar numbers of supporters.

The Rhodes Must Fall campaign was back in the spotlight last week, when the Times Higher ran a lengthy article covering a range of controversial statues in universities across the globe. A day before the article appeared, I had happened upon the Explorers' Monument in Fremantle, Australia. The original monument, dating to 1913, commemorated explorers who had been killed by 'treacherous natives' in 1864. As I read the plaque, I was thinking that this was one-sided, to put it mildly.

Source: https://en.wikipedia.org/wiki/Explorers%27_Monument
But then, reading on, I came to the next plaque, below the first, which was added to give the view of those who were offended by the original statue and plaque. 

Source: https://en.wikipedia.org/wiki/Explorers%27_Monument
I like this solution.  It does not airbrush controversial figures and events out of history. Rather, it forces one to think about the ways in which a colonial perspective damaged many indigenous people - and perhaps to question other things that are just taken for granted. It also creates a lasting reminder of the issues currently under debate – whereas if a statue is removed, all could be forgotten in a few years’ time. 
Obviously, taken to extremes, this approach could get out of control – one can imagine a never-ending sequence of plaques like the comments section on a Guardian article. But used judiciously, this approach seems to me to be a good solution to this debate.

Friday, 28 November 2014

Metricophobia among academics

Most academics loathe metrics. I’ve seldom attracted so much criticism as for my suggestion that a citation-based metric might be used to allocate funding to university departments. This suggestion was recycled this week in the Times Higher Education, after a group of researchers published predictions of REF2014 results based on departmental H-indices for four subjects.

Twitter was appalled. Philip Moriarty, in a much-retweeted plea said: “Ugh. *Please* stop giving credence to simplistic metrics like the h-index. V. damaging”. David Colquhoun, with whom I agree on many things, responded like an exorcist confronted with the spawn of the devil, arguing that any use of metrics would just encourage universities to pressurise staff to increase their H-indices.

Now, as I’ve explained before, I don’t particularly like metrics. In fact, my latest proposal is to drop both REF and metrics and simply award funding on the basis of the number of research-active people in a department. But I’ve become intrigued by the loathing of metrics that is revealed whenever a metrics-based system is suggested, particularly since some of the arguments put forward do seem rather illogical.

Odd idea #1 is that doing a study relating metrics to funding outcomes is ‘giving credence’ to metrics. It’s not. What would give credence would be if the prediction of REF outcomes from H-index turned out to be very good. We already know that whereas it seems to give reasonable predictions for sciences, it’s much less accurate for humanities. It will be interesting to see how things turn out for the REF, but it’s an empirical question.

Odd idea #2 is that use of metrics will lead to gaming. Of course it will! Gaming will be a problem for any method of allocating money. The answer to gaming, though, is to be aware of how this might be achieved and to block obvious strategies, not to dismiss any system that could potentially be gamed. I suspect the H-index is less easy to game than many other metrics - though I’m aware of one remarkable case where a journal editor has garnered an impressive H-index from papers published in his own journals, with numerous citations to his own work. In general, though, those of us without editorial control are more likely to get a high H-index from publishing smaller amounts of high-quality science than churning out pot-boilers.
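For readers who haven't met it, the H-index is simply the largest number h such that h publications each have at least h citations. A minimal sketch of the calculation (in Python; the citation counts are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Invented example: citation counts for a set of papers
print(h_index([50, 22, 15, 9, 7, 6, 3, 1]))  # -> 6
```

The same function applies whether the citation counts belong to a single author or to every paper carrying a departmental address, which is the department-level version discussed in these posts.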

Odd idea #3 is the assumption that the REF’s system of peer review is preferable to a metric. At the HEFCE metrics meeting I attended last month, almost everyone was in favour of complex, qualitative methods of assessing research. David Colquhoun argued passionately that to evaluate research you need to read the publications. To disagree with that would be like slamming motherhood and apple pie. But, as Derek Sayer has pointed out, it is inevitable that the ‘peer review’ component of the REF will be flawed, given that panel members are required to evaluate several hundred submissions in a matter of weeks. The workload is immense and cannot involve the careful consideration of the content of books or journal articles, many of which will be outside the reader’s area of expertise.

My argument is a pragmatic one: we are currently engaged in a complex evaluation exercise that is enormously expensive in time and money, that has distorted incentives in academia, and that cannot be regarded as a ‘gold standard’. So, as an empirical scientist, my view is that we should be looking hard at other options, to see whether we might be able to achieve similar results in a more cost-effective way.

Different methods can be compared in terms of the final result, and also in terms of unintended consequences. For instance, in its current manifestation, the REF encourages universities to take on research staff shortly before the deadline – as satirised by Laurie Taylor (see Appointments section of this article). In contrast, if departments were rewarded for a high H-index, there would be no incentive for such behaviour. Also, staff members who were not principal investigators but who made valuable contributions to research would be appreciated, rather than threatened with redundancy.  Use of an H-index would also avoid the invidious process of selecting staff for inclusion in the REF.

I suspect, anyhow, we will find predictions from the H-index are less good for REF than for RAE. One difficulty for Mryglod et al is that it is not clear whether the Units of Assessment they base their predictions on will correspond to those used in REF. Furthermore, in REF, a substantial proportion of the overall score comes from impact, evaluated on the basis of case studies. To quote from the REF2014 website: “Case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe.” My impression is that impact was included precisely to capture an aspect of academic quality that was orthogonal to traditional citation-based metrics, and so this should weaken any correlation of outcomes with H-index.

Be this as it may, I’m intrigued by people’s reactions to the H-index suggestion, and wondering whether this relates to the subject one works in. For those in arts and humanities, it is particularly self-evident that we cannot capture all the nuances of departmental quality from an H-index – and indeed, it is already clear that correlations between H-index and RAE outcomes are relatively low in these disciplines. These academics work in fields where complex, qualitative analysis is essential. Interestingly, RAE outcomes in arts and humanities (as with other subjects) are pretty well predicted by departmental size, and it could be argued that this would be the most effective way of allocating funds.

Those who work in the hard sciences, on the other hand, take precision of measurement very seriously. Physicists, chemists and biologists are often working with phenomena that can be measured precisely and unambiguously. Their dislike for an H-index might, therefore, stem from awareness of its inherent flaws: it varies with subject area and can be influenced by odd things, such as high citations arising from notoriety.

Psychologists, though, sit between these extremes. The phenomena we work with are complex. Many of us strive to treat them quantitatively, but we are used to dealing with measurements that are imperfect but ‘good enough’. To take an example from my own research: years ago I wanted to measure the severity of children’s language problems, and I was using an elicitation task, where the child was shown pictures and asked to say what was happening. The test had a straightforward scoring system that gave indices of the maturity of the content and grammar of the responses. Various people, however, criticised this as too simple. I should take a spontaneous language sample, I was told, and do a full grammatical analysis. So, being young and impressionable, I did. I ended up spending hours transcribing tape-recordings from largely silent children, and hours more mapping their utterances onto a complex grammatical chart. The outcome: I got virtually the same result from the two processes – one of which took ten minutes and the other two days.

Psychologists evaluate their measures in terms of how reliable (repeatable) they are and how validly they do what they are supposed to do. My approach to the REF is the same as my approach to the rest of my work: try to work with measures that are detailed and complex enough to be valid for their intended purpose, but no more so. To work out whether a measure fits that bill, we need to do empirical studies comparing different approaches – not just rely on our gut reaction.

Wednesday, 18 June 2014

The University as big business:

The case of King's College London

 


© www.CartoonStock.com

King's College London is in the news for all the wrong reasons. In a document full of weasel words ('restructuring', 'consultation exercise'), staff in the schools of medicine and biomedical sciences, and the Institute of Psychiatry were informed last month that 120 of them were at risk of redundancy. The document was supposed to be confidential but was leaked to David Colquhoun who has posted a link to it on his blog.  This isn't the first time KCL has been in the news for its 'robust' management style. A mere four years ago, a similar though smaller purge was carried out at the Institute of Psychiatry, together with a major divestment in Humanities at KCL.

Any tale of redundancies on such a scale is a human tragedy, whether it be in a car factory or a University. But the two cases are not entirely parallel. For a car factory, the goal of the business is to make a profit. A sensible employer will try to maintain a cheerful and committed workforce, but ultimately they may be sacrificed if it proves possible to cut costs by, for instance, getting machines to do jobs that were previously done by people. The fact that a University is adopting that approach – sacking its academic staff to improve its bottom line – is an intellectual as well as a human tragedy. It shows how far we have moved towards the identification of universities with businesses.

Traditionally, a university was regarded as an institution whose primary function was the furtherance of learning and knowledge. Money was needed to maintain the infrastructure and pay the staff, but the money was a means to an end, not an end in itself. However, it seems that this quaint notion is now rejected in favour of a model of a university whose success is measured in terms of its income, not in terms of its intellectual capital.

The opening paragraph of the 'consultation document' is particularly telling: "King’s has built a reputation for excellence and has established itself as a world class university. Our success has been built on growing research volumes in key areas, improving research quality, developing our resources and offering quality teaching to attract the best students in an increasingly competitive environment." Note there is no mention of the academic staff of the institution. They are needed, of course, to "grow research volumes" (ugh!), just as factory workers are needed to manufacture cars. But they aren't apparently seen as a key feature of a successful academic institution. Note too the emphasis is on increasing the amount of research rather than research quality.

The most chilling feature of the document is the list of criteria that will be used to determine which staff are 'at risk'.  You are safe if you play a key role in teaching, or if you have grant income that exceeds a specified amount, dependent on your level of seniority.
What's wrong with this? Well, here are four points just for starters:

1. KCL management justifies its actions as key for "maintaining and improving our position as one of the world’s leading institutions". Sorry, I just don't get it. You don't improve your position by shedding staff, creating a culture of fear, and deterring research superstars from applying for positions in your institution in future.

2. The 'restructuring' treats individual scientists as islands. The Institute of Psychiatry has over the years built up a rich research community, where there are opportunities for people to bounce ideas off each other and bring complementary skills to tackling difficult problems. Making individuals redundant won't just remove an expense from the KCL balance sheet – it will also affect the colleagues of those who are sacked. 

3. As I've argued previously, the use of research income as a proxy measure of research excellence distorts and damages science. It provides incentives for researchers to get grants for the sake of it – the more numerous and more expensive the better. We end up with a situation where there is terrific waste because everyone has a massive backlog of unpublished work.
 
4. I suspect that part of the motivation behind the "restructuring" is the hope that new buildings and infrastructure might reverse the poor showing of KCL in recent league tables of student satisfaction. If so, the move has backfired spectacularly. The student body at KCL has started a petition against the sackings, which has drawn attention to the issue worldwide. I urge readers to sign it.

Management at KCL just doesn't seem to get a very basic fact about running a university: Its academic staff are vital for the university's goal of achieving academic excellence. They need to be fostered, not bullied. One feels that if KCL were falling behind in a boat race, they'd respond by throwing out some of the rowers.

Saturday, 26 January 2013

An alternative to REF2014?


After blogging last week about use of journal impact factors in REF2014, many people have asked me what alternative I'd recommend. Clearly, we need a transparent, fair and cost-effective method for distributing funding to universities to support research. Those designing the REF have tried hard over the years to devise such a method, and have explored various alternatives, but the current system leaves much to be desired.

Consider the current criteria for rating research outputs, designed by someone with a true flair for ambiguity:
4*: Quality that is world-leading in terms of originality, significance and rigour
3*: Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence
2*: Quality that is recognised internationally in terms of originality, significance and rigour
1*: Quality that is recognised nationally in terms of originality, significance and rigour

Since only 4* and 3* outputs will feature in the funding formula, a great deal hinges on whether research is deemed “world-leading”, “internationally excellent” or “internationally recognised”. This is hardly transparent or objective. That’s one reason why many institutions want to translate these star ratings into journal impact factors. But substituting a discredited objective criterion for a subjective one is not a solution.

The use of bibliometrics was considered but rejected in the past. My suggestion is that we should reconsider this idea, but in a new version. A few months ago, I blogged about how university rankings in the previous assessment exercise (RAE) related to grant income and citation rates for outputs. Instead of looking at citations for individual researchers, I used Web of Science to compute an H-index for the period 2000-2007 for each department, by using the ‘address’ field to search. As noted in my original post, I did this fairly hastily and the method can get problematic in cases where a Unit of Assessment does not correspond neatly to a single department. The H-index reflected all research outputs of everyone at that address – regardless of whether they were still at the institution or entered for the RAE. Despite these limitations, the resulting H-index predicted the RAE results remarkably well, as seen in the scatterplot below, which shows H-index in relation to the funding level following from the RAE. This is computed as the number of full-time-equivalent staff multiplied by the formula:
    0.1 × 2* + 0.3 × 3* + 0.7 × 4*
(N.B. I ignored subject weighting, so units are arbitrary).
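To make the formula concrete, here is a minimal sketch in Python. I am reading 2*, 3* and 4* as the proportions of the submission rated at each level (that interpretation, the staff numbers and the quality profile below are my own illustrative assumptions, and subject weighting is again ignored):

```python
def funding_units(fte_staff, prop_2star, prop_3star, prop_4star):
    """Arbitrary funding units: FTE staff multiplied by the weighted
    quality profile (weights 0.1, 0.3, 0.7 for 2*, 3*, 4*)."""
    return fte_staff * (0.1 * prop_2star + 0.3 * prop_3star + 0.7 * prop_4star)

# Invented department: 40 FTE staff, profile 20% 2*, 50% 3*, 30% 4*
print(funding_units(40, 0.20, 0.50, 0.30))  # -> 15.2 (arbitrary units)
```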

Psychology (Unit of Assessment 44), RAE2008 outcome by H-index
Yes, you might say, but the prediction is less successful at the top end of the scale, and this could mean that the RAE panels incorporated factors that aren’t readily measured by such a crude score as H-index. Possibly true, but how do we know those factors are fair and objective? In this dataset, one variable that accounted for additional variance in outcome, over and above departmental H-index, was whether the department had a representative on the psychology panel: if they did, then the trend was for the department to have a higher ranking than that predicted from the H-index. With panel membership included in the regression, the correlation (r) increased significantly from .84 to .86, t = 2.82, p = .006. It makes sense that if you are a member of a panel, you will be much more clued up than other people about how the whole process works, and you can use this information to ensure your department’s submission is strategically optimal. I should stress that this was a small effect, and I did not see it in a handful of other disciplines that I looked at, so it could be a fluke. Nevertheless, with the best intentions in the world, the current system can’t ever defend completely against such biases.
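The panel-membership check described above is simply a nested regression comparison, and is easy to reproduce on any similar dataset. A minimal sketch using statsmodels; the file and column names (h_index, panel_member, rae_outcome) are hypothetical placeholders, not those used in the original analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per department, with funding outcome,
# departmental H-index, and a 0/1 flag for panel membership
df = pd.read_csv("rae_psychology.csv")

base = smf.ols("rae_outcome ~ h_index", data=df).fit()
full = smf.ols("rae_outcome ~ h_index + panel_member", data=df).fit()

print(base.rsquared, full.rsquared)
# F-test: does panel membership add predictive value over H-index?
print(full.compare_f_test(base))
```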

So overall, my conclusion is that we might be better off using a bibliometric measure such as a departmental H-index to rank departments. It is crude and imperfect, and I suspect it would not work for all disciplines – especially those in the humanities. It relies solely on citations, and it's debatable whether that is desirable. But for sciences, it seems to be pretty much measuring whatever the RAE was measuring, and it would seem to be the lesser of various possible evils, with a number of advantages compared to the current system. It is transparent and objective, it would not require departments to decide who they do and don’t enter for the assessment, and most importantly, it wins hands down on cost-effectiveness. If we used this method instead of the RAE, a small team of analysts armed with Web of Science would be able to derive the necessary data in a couple of weeks, giving outcomes virtually identical to those of the RAE. The money saved both by HEFCE and individual universities could be ploughed back into research. Of course, people will attempt to manipulate whatever criterion is adopted, but this one might be less easily gamed than some others, especially if self-citations from the same institution are excluded.

It will be interesting to see how well this method predicts RAE outcomes in other subjects, and whether it can also predict results from the REF2014, where the newly-introduced “impact statement” is intended to incorporate a new dimension into assessment.

Sunday, 15 July 2012

The devaluation of low-cost psychological research

Psychology encompasses a wide range of subject areas, including social, clinical and developmental psychology, cognitive psychology and neuroscience. The costs of doing different types of psychology vary hugely. If you just want to see how people remember different types of material, for instance, or test children's understanding of numerosity, this can be done at very little cost. For most of the psychology I did as an undergraduate, data collection did not involve complex equipment, and data analysis was pretty straightforward - certainly well within the capabilities of a modern desktop computer. The main cost for a research proposal in this area would be for staff to do data collection and analysis. Neuroscience, however, is a different matter. Most kinds of brain imaging require not only expensive equipment, but also a building to house it and staff to maintain it, and all or part of these costs will be passed on to researchers. Furthermore, data analysis is usually highly technical and complex, and can take weeks, or even months, rather than hours. A project that involves neuroimaging will typically cost orders of magnitude more than other kinds of psychological research.
In academic research, money follows money. This is quite explicit in funding systems that reward an institution in proportion to their research income. This makes sense: an institution that is doing costly research needs funding to support the infrastructure for that research. The problem is that the money, rather than the research, can become the indicator of success. Hiring committees will scrutinise CVs for evidence of ability to bring in large grants. My guess is that, if choosing between one candidate with strong publications and modest grant income vs. another with less influential publications and large grant income, many would favour the latter. Universities, after all, have to survive in a tough financial climate, and so we are all exhorted to go after large grants to help shore up our institution's income. Some Universities have even taken to firing people who don't bring in the expected income. This means that cheap cost-effective research in traditional psychological areas will be devalued relative to more expensive neuroimaging.
I have no quarrel, in principle, with psychologists doing neuroimaging studies - some of my best friends are neuroimagers - and it is important that, if good science is to be done in this area, it should be properly funded. I am uneasy, though, about an unintended consequence of the enthusiasm for neuroimaging, which is that it has led to a devaluation of the other kinds of psychological research. I've been reading Thinking, Fast and Slow, by Daniel Kahneman, a psychologist who has the rare distinction of being a Nobel Laureate. He is just one example of a psychologist who has made major advances without using brain scanners. I couldn't help thinking that Kahneman would not fare well in the current academic climate, because his experiments were simple, elegant ... and inexpensive.
I've suggested previously that systems of academic rewards need to be rejigged to take into account not just research income and publication outputs, but the relationship between the two. Of course, some kinds of research require big bucks, but large-scale grants are not always cost-effective. And on the other side of the coin, there are people who do excellent, influential work on a small budget.
I thought I'd see if it might be possible to get some hard data on how this works in practice. I used data for Psychology Departments from the last Research Assessment Exercise (RAE), from this website, and matched this up against citation counts for publications that came out in the same time period (2000-2007) from Web of Knowledge. The latter is a bit tricky, and I'm aware that figures may contain inaccuracies, as I had to search by address, using the name of the institution coupled with the words Psychology and UK. This will miss articles that don't have these words in the address. Also when double-checking the numbers, I  found that for a search by address, results can fluctuate from one occasion to the next. For these reasons, I'd urge readers to treat the results with caution, and I won't refer to institutions by name. Note too that though I restrict consideration to articles between 2000-2007, the citations extend beyond the period when the RAE was completed. Web of Knowledge helpfully gives you an H-index for the institution if you ask for a citation report, and this is what I report here, as it is more stable across repeated searches than the citation count. Figure 1 shows how research income for a department relates to its H-index, just for those institutions deemed research active, which I defined as having a research income of at least £500K over the reporting period. The overall RAE rating is colour-coded into bandings, and the symbol denotes whether or not the departmental submission mentions neuroimaging as an important part of its work.
Data from RAE and Web of Knowledge: treat with caution!
Several features are seen in these data, and most are unsurprising:
  • Research income and H-index are positively correlated, r = .74 (95%CI .59-.84) as we would expect. Both variables are correlated with the number of staff entered in the RAE, but the correlation between them remains healthy when this factor is partialled out, r = .61 (95%CI .40-.76) (see the sketch after this list).
  • Institutions coded as doing neuroimaging have bigger grants: after taking into account differences in number of staff, the mean income for departments with neuroimaging was £7,428K and for those without it was £3,889K (difference significant at p = .01).
  • Both research income and H-index are predictive of RAE rankings: the correlations are .68 (95% CI .50-.80) for research income and .79 (95% CI .66-.87) for H-index, and together they account for 80% of the variance in rankings. We would not expect perfect prediction, given that the RAE committee went beyond metrics to assess aspects of research quality not reflected in citations or income. And in addition, it must be noted that the citations counted here are for all researchers at a departmental address, not just those entered in the RAE.
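As flagged in the first bullet, partialling out staff numbers is straightforward to reproduce. A minimal sketch of a partial correlation via the residual method; the variable names in the usage line are placeholders for per-department vectors of income, H-index and staff numbers, not the actual data:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    x, y, z = map(np.asarray, (x, y, z))
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x given z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y given z
    return stats.pearsonr(rx, ry)                # returns (r, p)

# Hypothetical usage: r, p = partial_corr(income, h_index, n_staff)
```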
A point of concern to me in these data, though, is the wide spread in H-index seen for those institutions with the highest levels of grant income. If these numbers are accurate, some departments are using their substantial income to do influential work, while others seem to achieve no more than other departments with much less funding. There may be reasonable explanations for this - for instance, a large tranche of funding may have been awarded in the RAE period but not had time to percolate through to publications. But nevertheless, it adds to my concern that we may be rewarding those who chase big grants without paying sufficient attention to what they do with the funding when they get it.
What, if anything, should we do about this? I've toyed in the past with the idea of a cost-efficiency metric (e.g. citations divided by grant income), but this would not work as a basis for allocating funds, because some types of research are intrinsically more expensive than others. In addition, it is difficult to get research funding, and success in this arena is in itself an indicator that the researchers have impressed a tough committee of their peers. So, yes, it makes sense to treat level of research funding as one indicator of an institution's research excellence when rating departments to determine who gets funding. My argument is simply that we should be aware of the unintended consequences if we rely too heavily on this metric. It would be nice to see some kind of indicator of cost-effectiveness included in ratings of departments alongside the more traditional metrics. In times of financial stringency, it is particularly short-sighted to discount the contribution of researchers who are able to do influential work with relatively scant resources.
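To make the objection to a raw cost-efficiency metric concrete, here is a toy comparison (all figures invented): two departments with broadly similar citation impact but very different cost bases come out far apart, simply because one works in an intrinsically expensive field.

```python
def citations_per_100k(total_citations, grant_income_gbp):
    """Toy cost-efficiency metric: citations per £100k of grant income."""
    return total_citations / (grant_income_gbp / 100_000)

# Invented figures: similar impact, very different costs
print(citations_per_100k(2000, 5_000_000))  # imaging-heavy department  -> 40.0
print(citations_per_100k(1800, 800_000))    # low-cost behavioural dept -> 225.0
```

This is why, as argued above, such a ratio could not be used mechanically to allocate funds, even if it is a useful flag for unintended consequences.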