Ramblings on academic-related matters. For information on my research see https://www.psy.ox.ac.uk/research/oxford-study-of-children-s-communication-impairments. Twin analysis blog: http://dbtemp.blogspot.com/ . ERP time-frequency analysis blog: bishoptechbits.blogspot.com/ . For tweets, follow @deevybee.
Wednesday, 30 June 2010
Book Review: The Invisible Gorilla
Chabris, C., & Simons, D. (2010). The invisible gorilla and other ways our intuition deceives us. London: HarperCollins.
Psychology is a much misunderstood discipline. If you go into a high street bookstore, you will find the psychology section stuffed with self-help manuals and in all probability located next to the section on witchcraft and the occult. This is partly the fault of the Dewey Decimal Classification, which sandwiches Psychology firmly between Philosophy and Religion. For those who regard experimental psychology as a scientific discipline with affinities to medicine and biology, this is a problem, and some psychology departments have dissociated themselves from the fluffy fringes of the discipline by renaming themselves as departments of cognitive science or behavioural neuroscience. An alternative strategy is to reclaim the term psychology to refer to a serious scientific discipline by demonstrating how experimentation can illuminate mental processes and come up with both surprising and useful results. This book does just that, and it does so in an engaging and accessible style.
The book starts out with the phenomenon referred to in the title, and for which the authors are best known, i.e. the Invisible Gorilla experiment. This has become well known, but I won't describe it in case the reader has not yet experienced the phenomenon. Richard Wiseman has a nice video demonstrating it. This is perhaps the most striking example of how we can deceive ourselves and be over-confident in our judgement of what we see, remember or know. In all there are six chapters, each dealing with a different 'everyday illusion' to which we are susceptible.
My personal favourites were the last two chapters, which consider why people continue to believe in notions such as the damaging effect of MMR vaccination, or the beneficial effects of brain training for the elderly. Sceptics tend to dismiss those who persist in such beliefs in the face of negative evidence, and denigrate them as stupid and scientifically illiterate. Chabris and Simons, however, are interested in why scientific evidence is so often rejected, and consider why it is that anecdotes are so much more powerful than data, and why we are sucked in to assuming there is causation when only correlation has been demonstrated. My one disappointment was that they did not say more about the reasons for the wide individual variation in people's scepticism. After a rigorously sceptical undergraduate course in experimental psychology at Oxford, I assumed that all my peers on the course would be sceptics through-and-through, but that is far from being the case: I have intelligent friends who learned all about the scientific method, just as I did, yet who are now adherents of alternative therapies or psychoanalysis. I find this deeply puzzling, but it makes me realise that the satisfaction I find in the scientific method is in part due to the fact that it resonates with the way my brain works, and there are others for whom this is not so.
In sum, I enjoyed this book for the insights it gave into how people think and reason, and for its emphasis on the need to adopt scepticism as a mind-set. Its avoidance of jargon and clear explanations give it broad appeal, and it would make an ideal text for undergraduates entering the field of experimental psychology, because it illustrates how a good experimenter thinks about evidence and designs studies to test hypotheses.
What is a mental process? The stuff we're conscious of and can't quite put into data, or a limbo between real, wet, neural processes, and observable behaviour - sort of a structured version of the latent variables used by psychometricians (who often seem allergic to things cognitive and processy).
It used to be considered bad form to refer to something as a neural process unless it referred to synapses, but is this still the case? There are various levels of "neural" from absence of neural due to lesions and BOLD activation patterns, down to vesicle kissing and gene expression. Maybe behavioural neuroscience is allowed up another level to more abstract representations currently called "mental", "cognitive", and the mental can be returned to refer to the what-it-feels-like magic.
In reply to Andy:
Mental processes: definitely not neuro. Things like memory, perception, reasoning, comprehension, etc. You can call them 'cognitive' if you prefer, though 'mental' has a respectable tradition going back to William James.
I distinguish in my work between four levels of description, namely etiology (genes/environments), neurology, cognition and observed behaviour. Obviously each level links to the others, and much of the research interest is in making the links. But this book illustrates just how the mental/cognitive level can be a valid topic of experimental study in its own right: I'd argue it's not a limbo, just another level of description.
Many thanks for the thoughtful reply. Okay, let's see where this goes.
The best-known analogy is the computer. The hardware stuff you can kick is analogous to the brain; the stuff you see on the screen is, I suppose, the phenomenology; and then the software, all of which correlates with stuff you could detect in the hardware if you looked hard enough, some but not all of which affects the screen, is cognition.
Is that the idea?
From the engineering perspective, the point of the levels is clear. When you want, say, to get your computer to take draws from a Gaussian distribution, you probably don't want to be fiddling around directly with the physical memory chips in your PC. It's much less painful to rely on years of abstraction and just type a command in your favourite stats package. You intervene, via the keyboard, at the level of software, and care very little about what the hardware is doing.
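By way of illustration, here's a rough sketch in Python with NumPy (my own choice of language and library, not anything prescribed above): a single high-level call does the sampling, and the hardware never gets a mention.

```python
# A minimal sketch of intervening at the software level:
# one abstract call yields Gaussian draws, while the memory chips,
# caches and registers do their work entirely out of sight.
import numpy as np

rng = np.random.default_rng(seed=42)               # seed chosen arbitrarily for illustration
samples = rng.normal(loc=0.0, scale=1.0, size=5)   # five draws from N(0, 1)
print(samples)
```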
What is the point of the levels for understanding a system?
We want to tell a story about people-level phenomena, like remembering things, reasoning about things, interpreting language, expressing emotions. Layers of abstraction are necessary to isolate the important points of this story. The effect of phonological similarity on memory processes would be lost if expressed in terms of neurotransmitters. Pragmatic language effects in reasoning tasks would be difficult to grasp if expressed in terms of gene expression.
Now what I don't understand is when the neural becomes the cognitive. There already are many levels of neural, not all of which you can poke. Thinking here about the sorts of things you can do with EEG and MEG where the story is tremendously abstract, though dependent on stuff measurable from the brain.
Maybe a clue comes from how you intervene on the system. Here, again, I'm not sure how the cognitive-level helps. You can intervene with TMS (or ECT, I suppose), you can intervene with drugs, or you can intervene with verbal instructions and other stimuli. You can also intervene socially and culturally.
How do you intervene cognitively or mentally?
Maybe this is the wrong way to think about it, but it bothers me a lot!
I do accept that very abstract theories of neural representations are necessary. But I think they are still theories of something neural, even if they don't mention bits of brain.
Finally, what is cognition? Here I collected some quotations which show how messy the concept is:
http://figuraleffect.wordpress.com/2008/06/02/what-is-cognition/