Wednesday, 1 January 2025
Retrospective look at blog highlights of 2024: What happened next?
Monday, 23 December 2024
Finland vs. Germany: the case of MDPI
It's been an odd week for the academic publisher MDPI. On 16th December, Finland's Publication Forum (known as JUFO) announced that from January 2025 it was downgrading its classification of 271 open access journals to the lowest level, zero. By my calculations, this includes 187 journals from MDPI and 82 from Frontiers, plus 2 others. This is likely to deter Finnish researchers from submitting work to these journals, as the rating indicates they are of low quality. As explained in an article in the Times Higher Education, JUFO justified its decision on the grounds that these journals “aim to increase the number of publications with the minimum time spend for editorial work and quality assessment”.
Then the very next day, 17th December, MDPI announced that it had secured a national publishing agreement with ZB Med, which offered substantial discounts to authors from over 100 German Universities publishing in MDPI journals. On Bluesky, this news was not greeted with the universal joy that ZB Med might have anticipated, with comments such as "an embarrassment", "shocking", and "a catastrophe".
To understand the background, it's worth reading a thread on Bluesky by Mark Hanson, one of the authors of a landmark paper entitled "The Strain on Scientific Publishing". This article showed that MDPI and Frontiers stand out from other publishers in having exponential growth in the number of papers published in recent years, a massive shift to special issues as a vehicle for this increase, and a sharp drop in publication lag from 2016 to 2022. In their annual report for 2023, MDPI described annualised publication growth of 20.4%. They stated that they have over 295,693 academic editors on editorial boards, and a median lag of 17 days from receipt of a paper to the first editorial decision and 6 weeks from submission to publication. It's not clear whether the 17-day figure includes desk rejections, but even if it does, this is remarkably fast. Of course, you could argue (and I'm sure the publishers will argue) that if you are publishing a lot more and doing it faster, you are just being efficient. However, an alternative view, and one apparently held by JUFO, is that this speedy processing goes hand in hand with poor editorial quality control.
The problem here is that anyone who has worked as a journal editor knows that, while a 17-day turnaround might be a good goal to aim for, it is generally not feasible. There is a limited pool of experts who can do a thorough peer review, and one often has to ask as many as 20 people in order to secure 2 or 3 reviews. So it can take at least a couple of weeks to line up reviewers, and then it is likely to be another couple of weeks before all of them have completed a comprehensive review. Given these constraints, most editors would be happy to achieve a median time to first decision of 34 days - i.e. double that reported by MDPI. So the sheer speed of decision making - regarded by MDPI as a selling point for authors - is a red flag.
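To make that arithmetic concrete, here is a toy Monte Carlo sketch of time to first post-review decision. Every parameter (weekly waves of five invitations, a 15% acceptance rate, reviews returned 10-28 days after acceptance) is an illustrative guess on my part, not a measured value:

```python
import random

def median_days_to_first_decision(n_papers=10_000, seed=1):
    """Toy model: how long until two completed reviews are in hand?

    All parameters are illustrative guesses, not measured values:
    invitations go out in weekly waves of 5, each invitee accepts
    with probability 0.15, and an accepted review is returned
    10-28 days after acceptance.
    """
    random.seed(seed)
    decision_days = []
    for _ in range(n_papers):
        completions = []
        day = 0
        while len(completions) < 2:
            day += 7  # wait a week for responses to this wave of invitations
            accepted = sum(random.random() < 0.15 for _ in range(5))
            completions += [day + random.randint(10, 28) for _ in range(accepted)]
        # a decision is possible once the second review arrives
        decision_days.append(sorted(completions)[1])
    decision_days.sort()
    return decision_days[n_papers // 2]

print(median_days_to_first_decision())  # roughly 40 days under these guesses
```

Under these admittedly made-up assumptions, the median comes out at around five to six weeks - close to the 34 days suggested above, and nowhere near 17.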
It seems that speed is achieved by adopting a rather unorthodox process of assigning peer reviewers, where the involvement of an academic editor is optional: "At least two review reports are collected for each submitted article. The academic editor can suggest reviewers during pre-check. Alternatively, MDPI editorial staff will use qualified Editorial Board members, qualified reviewers from our database, or new reviewers identified by web searches for related articles." The impression is that, in order to meet targets, editorial staff will select peer reviewers who can be relied upon to respond immediately to requests to review.
A guest post on this blog by René Aquarius supported these suspicions and further suggested that reviewers who are critical may be sidelined. After writing an initial negative review, René had promptly agreed to review a revision of a paper, but was then told his review was not needed - this is quite remarkable, given that most editors would be delighted if a competent reviewer agreed to do a timely re-review. It's worth looking not just at René's blogpost but also the comments from others, which indicate his experience is not unique.
A further issue concerns the fate of papers receiving negative reviews. René found that the paper he had rejected popped up as a new submission in another MDPI journal, and after negative reviews there, it was published in a third journal. This raises questions about MDPI's reported rejection rate of around 50%. If each of these resubmissions was counted as a new submission, the rejection rate for this paper would appear to be 66% (two rejections across three submissions), but given that the same manuscript was recycled from journal to journal before eventual acceptance, the actual rate was 0%.
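As a worked example - a minimal sketch using just the single recycled paper described above - the same events yield very different "rejection rates" depending on whether one counts submissions or manuscripts:

```python
# One manuscript, submitted to three MDPI journals in turn:
# rejected at the first two, accepted at the third.
submissions, rejections = 3, 2

per_submission_rate = rejections / submissions   # what the publisher can report
per_manuscript_rate = 0 / 1                      # the paper got in eventually

print(f"per-submission rejection rate: {per_submission_rate:.1%}")   # 66.7%
print(f"per-manuscript rejection rate: {per_manuscript_rate:.1%}")   # 0.0%
```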
One good thing about MDPI is that it gives authors the option of making peer review open. However, analysis of published peer reviews further dents confidence in the rigour of the peer review process. A commentary in Science covering the work of Maria Ángeles Oviedo García noted how some MDPI peer reviews contained repetitive phrases that suggested they were generated by a template. They were superficial and did not engage seriously with the content of the article. In some cases the reviewer freely admitted a lack of expertise in the topic of the article, and in others, there was evidence of coercive citation (i.e., authors being told to cite the work of a specific individual, sometimes the editor).
Comments on Reddit cannot, of course, be treated as hard evidence, but they raise further questions about the peer review process at MDPI. Several commenters described having recommended rejection as a peer reviewer, only to find that the paper was published with their comments nowhere to be seen. If negative peer review comments are selectively suppressed in the public record, this would be a serious breach of ethical standards by the publisher.
Lack of competent peer review and/or editorial malpractice is also suggested by the publication of papers in MDPI that fall well below the threshold of acceptable science. Of course, quality judgements are subjective, and it's common for researchers to complain "How did this get past peer review?" But in the case of MDPI journals, one finds articles that are so poor that they suggest the editor was either corrupt or asleep on the job. I have covered some examples in previous blogposts, here, and here.
The Finns are not alone in being concerned about research quality in MDPI journals. In November 2023 the Swiss National Science Foundation, without naming specific publishers, withdrew funding for articles published in special issues. Since 2023, the Directory of Open Access Journals has removed 19 MDPI journals from its index for "Not adhering to best practice". Details are not provided, but these tend to be journals where guest editors have used special issues to publish a high volume of articles by themselves and close collaborators - another red flag for scientific quality. Yet another issue is citation manipulation, where editors or peer reviewers demand the inclusion of specific references in a revision of an article in order to boost their own citation counts. In February 2024, China released its latest Early Warning List of journals deemed "untrustworthy, predatory or not serving the Chinese research community's interests"; it included four MDPI journals, listed for citation manipulation.
A final red flag is that MDPI seems remarkably averse to retracting articles. Hindawi, which was bought by Wiley in 2021, was heavily criticised for allowing a flood of paper-milled articles to be published, but it did subsequently retract over 13,000 of them (just under 5% of 267K articles) before the closure of the brand. A search on Web of Science for documents classified as "retracted publication" or "retraction" and published by "MDPI or MDPI Ag" turned up a mere 582 retractions since 2010, which amounts to 0.04% of the 1.4 million articles listed on the database.
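For anyone wanting to check, here is the back-of-envelope arithmetic behind those percentages, using only the figures quoted above:

```python
# Retractions as a proportion of total output, per the figures above.
hindawi_rate = 13_000 / 267_000     # Hindawi retractions / Hindawi articles
mdpi_rate = 582 / 1_400_000         # WoS-listed MDPI retractions / MDPI articles

print(f"Hindawi: {hindawi_rate:.1%}")   # 4.9% - "just under 5%"
print(f"MDPI:    {mdpi_rate:.2%}")      # 0.04%
```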
I've heard various arguments against JUFO's action: many papers published in MDPI journals are fine; you should judge an article by its content, not where it is published; authors should be free to prefer speed over scrutiny if they wish. The reason I support JUFO, and think ZB Med is rash to sign an agreement, is that when the peer review process is not managed by experienced and respected academic editors with specialist subject knowledge, we need to consider the impact not just on individual authors, but on the scientific literature as a whole. If we allow trivia, fatally flawed studies or pseudoscience to be represented as "peer-reviewed", this contaminates the research literature, with adverse consequences for everyone.
Monday, 25 November 2024
Why I have resigned from the Royal Society
2.6. Fellows and Foreign Members shall carry out their scientific research with regard to the Society's statement on research integrity and to the highest standards.

2.10. Fellows and Foreign Members shall treat all individuals in the scientific enterprise collegially and with courtesy, including supervisors, colleagues, other Society Fellows and Foreign Members, Society staff, students and other early-career colleagues, technical and clerical staff, and interested members of the public.

2.11. Fellows and Foreign Members shall not engage in any form of discrimination, harassment, or bullying.
'I think what concerns people is that Neuralink could be cutting corners, and so far nobody has stopped them,' says Nick Ramsey, a clinical neuroscientist at University Medical Center Utrecht, in the Netherlands. 'There's an incredible push by Neuralink to bypass the conventional research world, and there's little interaction with academics, as if they think that we're on the wrong track—that we're inching forward while they want to leap years forward.'
In response to Musk's claim that no monkey had died because of Neuralink, the Physicians Committee for Responsible Medicine wrote to the SEC, claiming Musk’s comments were false. The group said it had obtained veterinary records from Neuralink’s experiments showing that at least 12 young, healthy monkeys were euthanized as a result of problems with Neuralink’s implant. The group alleged that Musk’s comments are misleading investors, and urged SEC regulators to investigate Musk and Neuralink for securities fraud.
Fellows and Foreign Members shall not act or fail to act in any way which would undermine the Society's mission or bring the Society into disrepute.
The Royal Society’s mission since it was established in 1660 has been to promote science for the benefit of humanity, and a major strand of that is to communicate accurately. But false information is interfering with that goal. It is accused of fuelling mistrust in vaccines, confusing discussions about tackling the climate crisis and influencing the debate about genetically modified crops.
Musk's reason for buying Twitter was to influence the social discourse. And influence he did - by using his enormous platform (203 million followers) to endorse Trump, spread disinformation about voter fraud, share deepfakes of Kamala Harris, and amplify conspiracy theories and hate, from vaccines to race replacement theory to misogyny.
- Dr Fauci's monkey business on NIH's "monkey island": $33,200,000
- NIH's meth-head monkeys: portion of $12,000,000
- Dr Fauci's transgender monkey study: $477,121
Sunday, 27 October 2024
I don't care about journal impact factors but I do care about visibility
There's been a fair bit of discussion about Clarivate's decision to pause inclusion of eLife publications on the Science Citation Index (e.g. on Research Professional). What I find exasperating is that most of the discussion focuses on a single consequence - loss of eLife's impact factor. For authors, there are graver consequences.
I've reviewed for eLife but never published there; however, I have published a lot in Wellcome Open Research, another journal that aimed to disrupt the traditional publishing model and has some similarities with eLife. Wellcome Open Research has an unconventional model whereby submitted papers are immediately published, as a kind of preprint, and then updated after peer review. It is true that some papers don't get sufficient approval to proceed to the peer-reviewed stage, but the distinction between those that do and do not pass peer review is clearly flagged on the article. In addition to peer review, Wellcome Open Research maintains some quality control by limiting eligibility to researchers funded by Wellcome. Yet despite using peer review, it has never been included in the Science Citation Index.
When Wellcome Open Research started up, all Wellcome-funded researchers were encouraged to publish there. To someone committed to Open Research, this seemed a great idea: there were no publication charges, and everything was open - the publication itself, the data, and the peer review. Peer reviewers even get DOIs for their reviews, some of which are worth citing in their own right. I was increasingly adopting open practices, and I think some of my best peer-reviewed work is published there.
I was shocked when I discovered that the journal wasn't included in Web of Science. I remember preparing a progress report for Wellcome and using Web of Science to check I hadn't omitted any publications. I was puzzled that I seemed to have published far less than I remembered. Then it became clear: everything in Wellcome Open Research was missing.
I was on the Advisory Board for Wellcome Open Research at the time, and raised this with them. They were shocked that I was upset. "We thought you of all people didn't care about impact factors", they said. This, of course, was true. But I did care a lot about my work being visible. I was also aware that any WOS-based H-index would exclude all the papers listed below: not a big deal for me, but potentially harmful to junior authors.
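To see why this matters, recall that the h-index is the largest h such that h of an author's papers each have at least h citations. A quick sketch with invented citation counts (the numbers below are purely illustrative, not anyone's real record) shows how an index computed only from WoS-visible papers is deflated:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Invented citation counts, purely for illustration.
wos_visible = [45, 30, 22, 18, 9, 6, 4]   # papers indexed in Web of Science
missing = [25, 15, 12, 10]                # Wellcome Open Research papers

print(h_index(wos_visible))             # 6
print(h_index(wos_visible + missing))   # 9
```

For an early-career researcher with a short publication list, losing a handful of well-cited papers from the calculation can make exactly this kind of difference.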
The reply I got was similar to the argument being made by eLife - well, the articles are indexed in Google Scholar and PubMed. That was really little consolation to me, given that I had relied heavily on Web of Science in my own literature searches, believing that it screened out dodgy journals. (This belief turns out to be false - there are many journals featured in WoS that are very low quality, which just rubs salt into the wound).
I have some criticisms of eLife's publishing model, but I would like them to succeed. We urgently need alternatives to the traditional journal model operated by the big commercial publishers. Their response to the open access movement has been to monetise it, with catastrophic consequences for science, as an unlimited supply of shoddy and fake work gets published - often in journals that are indexed in Web of Science.
I agree that we need an index of published academic work that has some quality control. Whether alternatives like OpenAlex will do the job remains to be seen.
Papers that aren't indexed on Web of Science
Bishop, D. V. M., & Bates, T. C. (2020). Heritability of language laterality assessed by functional transcranial Doppler ultrasound: A twin study. Wellcome Open Research, 4, 161. https://doi.org/10.12688/wellcomeopenres.15524.3
Bishop, D. V. M., Brookman-Byrne, A., Gratton, N., Gray, E., Holt, G., Morgan, L., Morris, S., Paine, E., Thornton, H., & Thompson, P. A. (2019). Language phenotypes in children with sex chromosome trisomies. Wellcome Open Research, 3, 143. https://doi.org/10.12688/wellcomeopenres.14904.2
Bishop, D. V. M., Grabitz, C. R., Harte, S. C., Watkins, K. E., Sasaki, M., Gutierrez-Sigut, E., MacSweeney, M., Woodhead, Z. V. J., & Payne, H. (2021). Cerebral lateralisation of first and second languages in bilinguals assessed using functional transcranial Doppler ultrasound. Wellcome Open Research, 1, 15. https://doi.org/10.12688/wellcomeopenres.9869.2
Frizelle, P., Thompson, P. A., Duta, M., & Bishop, D. V. M. (2019). The understanding of complex syntax in children with Down syndrome. Wellcome Open Research, 3, 140. https://doi.org/10.12688/wellcomeopenres.14861.2
Newbury, D. F., Simpson, N. H., Thompson, P. A., & Bishop, D. V. M. (2018). Stage 1 Registered Report: Variation in neurodevelopmental outcomes in children with sex chromosome trisomies: protocol for a test of the double hit hypothesis. Wellcome Open Research, 3, 10. https://doi.org/10.12688/wellcomeopenres.13828.2
Newbury, D. F., Simpson, N. H., Thompson, P. A., & Bishop, D. V. M. (2021). Stage 2 Registered Report: Variation in neurodevelopmental outcomes in children with sex chromosome trisomies: testing the double hit hypothesis. Wellcome Open Research, 3, 85. https://doi.org/10.12688/wellcomeopenres.14677.4
Pritchard, V. E., Malone, S. A., Burgoyne, K., Heron-Delaney, M., Bishop, D. V. M., & Hulme, C. (2019). Stage 2 Registered Report: There is no appreciable relationship between strength of hand preference and language ability in 6- to 7-year-old children. Wellcome Open Research, 4, 81. https://doi.org/10.12688/wellcomeopenres.15254.1
Thompson, P. A., Bishop, D. V. M., Eising, E., Fisher, S. E., & Newbury, D. F. (2020). Generalized Structured Component Analysis in candidate gene association studies: Applications and limitations. Wellcome Open Research, 4, 142. https://doi.org/10.12688/wellcomeopenres.15396.2
Wilson, A. C., & Bishop, D. V. M. (2019). ‘If you catch my drift...’: Ability to infer implied meaning is distinct from vocabulary and grammar skills. Wellcome Open Research, 4, 68. https://doi.org/10.12688/wellcomeopenres.15210.3
Wilson, A. C., King, J., & Bishop, D. V. M. (2019). Autism and social anxiety in children with sex chromosome trisomies: An observational study. Wellcome Open Research, 4, 32. https://doi.org/10.12688/wellcomeopenres.15095.2
Woodhead, Z. V. J., Rutherford, H. A., & Bishop, D. V. M. (2020). Measurement of language laterality using functional transcranial Doppler ultrasound: A comparison of different tasks. Wellcome Open Research, 3, 104. https://doi.org/10.12688/wellcomeopenres.14720.3
Monday, 21 October 2024
What is going on at the Journal of Psycholinguistic Research?
Last week this blog focussed on problems affecting Scientific Reports, a mega-journal published by Springer Nature. This week I look at a journal at the opposite end of the spectrum, the Journal of Psycholinguistic Research (JPR), a small, specialist journal which has published just 2187 papers since it was founded in 1971. This is fewer than Scientific Reports publishes in one year. It was brought to my attention by Anna Abalkina because it shows every sign of having been targeted by one or more Eastern European paper mills.
Now, this was really surprising to me. JPR was founded in 1971 by Robert Rieber, whose obituaries in the New York Times and the American Psychologist confirm he had a distinguished career (though both misnamed JPR!). The Advisory and Editorial boards of the journal are peppered with names of famous linguists and psychologists, starting with Noam Chomsky. So there is a sense that if this can happen to JPR, no journal is safe.
Coincidentally, last week Anna and I submitted revisions for a commentary on paper mills, coauthored with Pawel Matusz (you can read the preprint here). Pawel is editor of the journal Mind, Brain & Education (MBE), which experienced an attack by the Tanu.pro paper mill involving papers published in 2022-3. In the commentary, we discussed characteristics of the paper mill, which are rather distinctive and quite different from what is seen in basic biomedical or physical sciences. A striking feature is that the IMRaD structure (Introduction, Methods, Results and Discussion) is used, but in a clueless fashion, with these headings inserted into what is otherwise a rambling and discursive piece of text that typically has little or no empirical content. Insofar as any methods are described, they don't occur in the Methods section, and they are too vague for the research to be replicable.
Reading these papers rapidly turns my brain to mush, but in the interest of public service I did wade through five of them and left comments on PubPeer:

Yeleussizkyzy, M., Zhiyenbayeva, N., Ushatikova, I. et al. E-Learning and Flipped Classroom in Inclusive Education: The Case of Students with the Psychopathology of Language and Cognition. J Psycholinguist Res 52, 2721–2742 (2023). https://doi.org/10.1007/s10936-023-10015-y

Snezhko, Z., Yersultanova, G., Spichak, V. et al. Effects of Bilingualism on Students' Linguistic Education: Specifics of Teaching Phonetics and Lexicology. J Psycholinguist Res 52, 2693–2720 (2023). https://doi.org/10.1007/s10936-023-10016-x
Nurakenova, A., Nagymzhanova, K. A Study of Psychological Features Related to Creative Thinking Styles of University Students. J Psycholinguist Res 53, 1 (2024). https://doi.org/10.1007/s10936-024-10042-3
Auganbayeva, M., Turguntayeva, G., Anafinova, M. et al. Linguacultural and Cognitive Peculiarities of Linguistic Universals. J Psycholinguist Res 53, 3 (2024). https://doi.org/10.1007/s10936-024-10050-3
Shalkarbek, A., Kalybayeva, K., Shaharman, G. et al. Cognitive Linguistic Analysis of Hyperbole-based Phraseological Expressions in Kazakh and English Languages. J Psycholinguist Res 53, 4 (2024). https://doi.org/10.1007/s10936-024-10052-1
My experience with this batch of papers suggests a relatively quick way of screening a submitted paper: look at the Methods section. It should describe what was done and how, at a level of detail sufficient for others to replicate the work. Obviously this is not appropriate for theoretical papers, but for those purporting to report empirical work it would work well - at least it would have caught the papers I looked at in JPR.
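As a thought experiment, such a screen could even be partly automated. The sketch below is hypothetical - the keyword list, section-detection regex and thresholds are invented for illustration and would need proper validation - but it captures the idea of flagging empirical papers whose Methods section is absent or too thin to support replication:

```python
import re

# Hypothetical keyword list of method-like terms - illustrative, not validated.
METHOD_TERMS = re.compile(
    r"\b(participants?|sample|procedure|stimuli|materials|questionnaire|"
    r"corpus|analysis|measures?)\b", re.IGNORECASE)

def screen_methods(full_text: str, min_words: int = 150, min_hits: int = 3) -> str:
    """Flag papers whose Methods section is missing or too thin to replicate."""
    match = re.search(r"\bmethods?\b(.*?)(?=\bresults\b|$)",
                      full_text, re.IGNORECASE | re.DOTALL)
    if not match:
        return "flag: no Methods section found"
    section = match.group(1)
    words, hits = len(section.split()), len(METHOD_TERMS.findall(section))
    if words < min_words or hits < min_hits:
        return f"flag: thin Methods ({words} words, {hits} method terms)"
    return "pass: Methods section looks substantive"

print(screen_methods("Introduction ... Methods We did things. Results ..."))
# -> flag: thin Methods (3 words, 0 method terms)
```

This would only ever be a first pass to prioritise human scrutiny; paper mills adapt quickly, and no keyword count substitutes for an editor actually reading the paper.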
All of these papers have authors from Kazakhstan, sometimes with co-authors from the Russian Federation. This led me to look at the geographic distribution of authors in the journal over time. The top countries represented among JPR authors from 2020 onwards are China (113), United States (68), Iran (52), Germany (28), Saudi Arabia (22) and Kazakhstan (19). However, these composite numbers mask striking trends. All the Kazakhstan-authored papers are from 2023-2024, and there is a notable fall-off in papers by USA-based authors in the same time period, with only 11 cases in total. This is quite remarkable for a journal whose authorship was strikingly USA-dominated until around 2015, as shown in the figure below (screenshot from Dimensions.ai).
Number of papers in JPR from five top countries, 2005-2024. Source: Dimensions.ai (exported 20 October 2024), © 2024 Digital Science and Research Solutions Inc.
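For anyone wanting to check for similar trends in another journal, the tabulation behind a figure like this is straightforward. Below is a minimal sketch assuming a CSV export from Dimensions with one row per paper and columns named year and country - those column names are my invention, so they would need adjusting to match a real export:

```python
import pandas as pd

# Hypothetical export: one row per paper, columns "year" and "country".
df = pd.read_csv("jpr_dimensions_export.csv")

# Top author countries from 2020 onwards
recent = df[df["year"] >= 2020]
top = recent.groupby("country").size().sort_values(ascending=False)
print(top.head(6))  # e.g. China, United States, Iran, Germany, ...

# Year-by-year counts for the five top countries, as in the figure above
trend = (df[df["country"].isin(top.head(5).index)]
         .groupby(["year", "country"]).size()
         .unstack(fill_value=0))
print(trend.tail(10))
```

A sudden surge from one country, concentrated in a year or two, is exactly the kind of anomaly this tabulation makes visible.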
Whenever a paper-mill infestation is discovered, it raises the question of how it happened. Surely the whole purpose of peer review is to prevent low-quality or fraudulent material entering the literature? In other journals where this has happened, the peer review process was found to have been compromised, with fake peer reviewers being used. Even so, one would have hoped that an editor would scrutinize papers and realise something was amiss. As mentioned in the previous blogpost, it would be much easier to track down how fraudulent papers get into mainstream journals if journals reported which editor handled each paper and published open peer review.
Whatever the explanation, it is saddening to see a fine journal brought so low. In 2021, at the 50th anniversary of the founding of the journal, the current editor, Rafael Art. Javier, wrote a tribute to his predecessor, Robert Rieber: "His expectation, as stated in that first issue, was that manuscripts accepted 'must add to knowledge in some way, whether they are in the form of experimental reports, review papers, or theoretical papers...and studies with negative results,' provided that they are of sufficiently high quality to make an original contribution."
Let us hope that the scourge of paper mills can be banished from the journal to allow it to be restored to the status it once had, and for Robert Rieber's words to once more be applicable.
Saturday, 19 October 2024
Bishopblog catalogue (updated 19 October 2024)
Source: http://www.weblogcartoons.com/2008/11/23/ideas/
Accentuate the negative (26 Oct 2011)
Novelty, interest and replicability (19 Jan 2012)
High-impact journals: where newsworthiness trumps methodology (10 Mar 2013)
Who's afraid of open data? (15 Nov 2015)
Blogging as post-publication peer review (21 Mar 2013)
Research fraud: More scrutiny by administrators is not the answer (17 Jun 2013)
Pressures against cumulative research (9 Jan 2014)
Why does so much research go unpublished? (12 Jan 2014)
Replication and reputation: Whose career matters? (29 Aug 2014)
Open code: not just data and publications (6 Dec 2015)
Why researchers need to understand poker (26 Jan 2016)
Reproducibility crisis in psychology (5 Mar 2016)
Further benefit of registered reports (22 Mar 2016)
Would paying by results improve reproducibility? (7 May 2016)
Serendipitous findings in psychology (29 May 2016)
Thoughts on the Statcheck project (3 Sep 2016)
When is a replication not a replication? (16 Dec 2016)
Reproducible practices are the future for early career researchers (1 May 2017)
Which neuroimaging measures are useful for individual differences research? (28 May 2017)
Prospecting for kryptonite: the value of null results (17 Jun 2017)
Pre-registration or replication: the need for new standards in neurogenetic studies (1 Oct 2017)
Citing the research literature: the distorting lens of memory (17 Oct 2017)
Reproducibility and phonics: necessary but not sufficient (27 Nov 2017)
Improving reproducibility: the future is with the young (9 Feb 2018)
Sowing seeds of doubt: how Gilbert et al's critique of the reproducibility project has played out (27 May 2018)
Preprint publication as karaoke (26 Jun 2018)
Standing on the shoulders of giants, or slithering around on jellyfish: Why reviews need to be systematic (20 Jul 2018)
Matlab vs open source: costs and benefits to scientists and society (20 Aug 2018)
Responding to the replication crisis: reflections on Metascience 2019 (15 Sep 2019)
Manipulated images: hiding in plain sight (13 May 2020)
Frogs or termites: gunshot or cumulative science? (6 Jun 2020)
Open data: We know what's needed - now let's make it happen (27 Mar 2021)
A proposal for data-sharing that discourages p-hacking (29 Jun 2022)
Can systematic reviews help clean up science? (9 Aug 2022)
Polyunsaturated fatty acids and children's cognition: p-hacking and the canonisation of false facts (4 Sep 2023)
Statistics
TEF in the time of pandemic (27 Jul 2020)
University staff cuts under the cover of a pandemic: the cases of Liverpool and Leicester (3 Mar 2021)
Some quick thoughts on academic boycotts of Russia (6 Mar 2022)
When there are no consequences for misconduct (16 Dec 2022)
Open letter to CNRS (30 Mar 2023)
When privacy rules protect fraudsters (12 Oct 2023)
Defence against the dark arts: a proposal for a new MSc course (19 Nov 2023)
An (intellectually?) enriching opportunity for affiliation (2 Feb 2024)
Just make it stop! When will we say that further research isn't needed? (24 Mar 2024)
Are commitments to open data policies worth the paper they are written on? (26 May 2024)
Whistleblowing, research misconduct, and mental health (1 Jul 2024)
Humour and miscellaneous

Orwellian prize for scientific misrepresentation (1 Jun 2010)
An exciting day in the life of a scientist (24 Jun 2010)
Science journal editors: a taxonomy (28 Sep 2010)
Parasites, pangolins and peer review (26 Nov 2010)
A day working from home (23 Dec 2010)
The one hour lecture (11 Mar 2011)
The expansion of research regulators (20 Mar 2011)
Scientific communication: the Comment option (25 May 2011)
How to survive in psychological research (13 Jul 2011)
Your Twitter Profile: The Importance of Not Being Earnest (19 Nov 2011)
2011 Orwellian Prize for Journalistic Misrepresentation (29 Jan 2012)
The ultimate email auto-response (12 Apr 2012)
Well, this should be easy…. (21 May 2012)
The bewildering bathroom challenge (19 Jul 2012)
Are Starbucks hiding their profits on the planet Vulcan? (15 Nov 2012)
Forget the Tower of Hanoi (11 Apr 2013)
How do you communicate with a communications company? (30 Mar 2014)
Noah: A film review from 32,000 ft (28 Jul 2014)
The rationalist spa (11 Sep 2015)
Talking about tax: weasel words (19 Apr 2016)
Controversial statues: remove or revise? (22 Dec 2016)
The alt-right guide to fielding conference questions (18 Feb 2017)
My most popular posts of 2016 (2 Jan 2017)
An index of neighbourhood advantage from English postcode data (15 Sep 2018)
Working memories: A brief review of Alan Baddeley's memoir (13 Oct 2018)
New Year's Eve Quiz: Dodgy journals special (31 Dec 2022)