
Controversy Over Brain Scan Use in Courtrooms

A murder suspect sits in a quiet room with electrodes placed on her head. The prosecution reads out its narrative of the crime and the suspect's alleged role in it. As she listens, the machines record her brain activity and reveal that she experienced aspects of the crime that only the murderer could have. Teased out by technology, her own memories have betrayed her. The verdict is guilty.

This scenario might seem far-fetched, but it's essentially what happened in an Indian trial that took place in 2008. The judge explicitly cited a scan as proof that the suspect's brain held "experiential knowledge" about the crime that only the killer could possess, and sentenced her to life in prison. There has been a smattering of attempts to use brain-scanning technology in this way, accompanied by an uproar over whether the technology is ready for such use.

But a new study by Jesse Rissman from Stanford University suggests that these promises are overplayed. Together with Henry Greely and Anthony Wagner, he has shown that brain scans can accurately decode whether people think they remember something, but not whether they actually remember something. And that gap between subjective and objective memory is a vast chasm as far as the legal system is concerned.

Our memories are stored within networks of neurons, so it's reasonable to think that by studying the patterns of activity within these networks, we should be able to decipher individual memories. Studies have already started to show that this is possible with existing brain-scanning techniques, and with every positive result, the temptation to use such advances in a practical setting grows.

The courtroom is an obvious candidate, especially because the brain responds differently when it experiences something new compared with something old. You could use brain scans to tell whether someone has actually seen a place, person or thing, reliably corroborating the accounts of witnesses and suspects without having to rely on the vagaries of accurate recall and moral fortitude. For this reason, techniques like functional magnetic resonance imaging (fMRI) have been enticingly billed as the ultimate in lie-detection technology, and claims of "mind-reading machines" and "psychic computers" have abounded in the press.

Putting Claims to the Test

To assess these claims, Rissman asked 16 volunteers to study 210 faces for four seconds each. An hour later, they saw 400 faces, half of which were old and half of which were new. They had to separate the two, and for each face they gave one of five responses - a certain recollection, or a sense that the face was old or new, held with either high or low confidence. Throughout these trials, Rissman scanned their brains using fMRI. He used pattern-recognition software to analyse the scans and identify patterns of activity that corresponded to each response - a "neural signature" of viewing an old face, or a new one.
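
To make the decoding step concrete, here is a minimal sketch of how such a pattern classifier might work, using synthetic data in place of real fMRI scans. The voxel count, effect size, and choice of scikit-learn's logistic regression are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of fMRI pattern classification, on synthetic data.
# Each trial is a vector of voxel activations labelled "old" or "new";
# all numbers below are illustrative, not the study's actual values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 400, 1000           # 400 test faces, hypothetical voxel count
labels = rng.integers(0, 2, n_trials)    # 1 = "old" face, 0 = "new" face

# Simulate a weak "neural signature": old faces nudge a subset of voxels.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :50] += 0.3

# Train a linear classifier and estimate decoding accuracy with
# cross-validation, so the classifier is always tested on unseen trials.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")  # chance is 0.50
```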

At first, his software seemed to be very accurate. When Rissman analysed trials where the recruits correctly classified the faces, his software could separate the old and new faces, based on brain activity alone, with an average accuracy of 83%. In trials where the volunteers were most confident in their judgments, that score rose as high as 95%.

The software passed other challenges too. For trials when the faces were all old (or all new), it separated 'hits' (where the volunteers rightly said that they'd seen the faces before) from 'misses' (where they incorrectly said that the faces were new) with an accuracy of 75%. It could separate trials where the recruits felt certain or confident in their judgments from those where they weren't so confident with accuracies between 79 and 90%. And best of all, the software could even reliably decode the brain scans of one individual after it was "trained" on the data from another.
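
That cross-individual result corresponds to the kind of test sketched below: train a classifier on one person's trials and score it on another's. Again, the data are synthetic and the helper function is hypothetical; this only illustrates the logic of the test, not the paper's actual analysis.

```python
# Sketch of cross-subject decoding with synthetic data: subjects share
# a "neural signature" (the same subset of voxels carries the signal),
# so a classifier trained on one person can decode another.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_subject(n_trials=400, n_voxels=1000, shift=0.3):
    """Generate one hypothetical subject's trials and old/new labels."""
    labels = rng.integers(0, 2, n_trials)
    patterns = rng.normal(size=(n_trials, n_voxels))
    patterns[labels == 1, :50] += shift   # shared signature voxels
    return patterns, labels

X_a, y_a = make_subject()   # the "training" individual
X_b, y_b = make_subject()   # a different individual

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print(f"Cross-subject accuracy: {clf.score(X_b, y_b):.2f}")
```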

So far, everything seemed promising. But all of these tests focused on the recruits' subjective memories - what they thought they remembered. If fMRI scans are to be truly admissible in court, they have to do better than that. Scientists must be able to use them to decode a person's objective memories - whether they actually remember something they saw.

To assess that, Rissman focused on trials where the recruits classified a face as old with low confidence. He wanted to see if the software could distinguish faces that were actually old and correctly remembered from those that were actually new but falsely remembered. It succeeded, but only just, with an average accuracy of 59%. For trials where the recruits classified faces as new with low confidence, the program did even worse at telling the right assessments from the wrong ones - guesswork would have been just as good.
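
To see what "only just" above chance means, one can ask how probable a given accuracy would be if the classifier were purely guessing. The sketch below uses SciPy's binomial test with a hypothetical trial count (the article doesn't give the paper's exact numbers).

```python
# Rough illustration: is 59% accuracy better than coin-flip guessing?
# The trial count here is a hypothetical assumption, not the study's.
from scipy.stats import binomtest

n_trials = 100                 # hypothetical number of low-confidence trials
n_correct = 59                 # corresponds to the reported 59% accuracy

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value against chance: {result.pvalue:.3f}")
```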

Two Interpretations of the Results

These results were bolstered by a variation on the same experiment. Again, Rissman showed seven recruits a set of 210 faces, but this time he told them to rate the faces' attractiveness rather than memorize them. When they saw the larger set of 400 faces, they initially only had to say whether each was male or female. The brain scans should still have been able to reveal whether the volunteers recognized the faces, even though they weren't explicitly trying to remember them. But they couldn't - the software only achieved an accuracy of 56%, not significantly different from guesswork.

These results are impressive and disappointing at the same time. They demonstrate that fMRI can decode the neural signatures of subjective recognition, at least under controlled laboratory conditions. The study also shows that software trained on one person can reliably decode the brain activity of another - fascinating in itself, because it suggests that these neural signatures are highly consistent from person to person. Vaughan Bell from King's College London (and the excellent Mind Hacks blog) says, "In other words, it is identifying brain activation patterns for conscious experiences of remembering, which seem to generalize across people."

But given all that, the technique's inability to separate what people think they saw from what they actually saw limits its use as a source of legal evidence. It means that the scans are only as good as the memories of the people being scanned, and we know that our memories are fickle and sometimes untrustworthy things.

Bell says, "Any potential fMRI 'lie detector' technology may be equally as liable to the memory distortions that affect eye witness testimony or other forms of courtroom recall. Perhaps a little speculatively, this may mean that although such technology could pick up someone who deliberately lies about what they remember, it may not be able to distinguish between someone whose memory had become distorted over time or who has come to believe false information."

Of course, fMRI scans have already found their way into courtrooms and more attempts are on the horizon. Just last week, a Brooklyn judge dismissed fMRI evidence from an employer-retaliation case, and Wired reports that on May 13, a Tennessee court will hear arguments over the admissibility of fMRI evidence in another hearing. Both cases involved a company called Cephos (whose CEO has, incidentally, turned up on this blog before).

But Rissman's work suggests that there are massive barriers to the use of fMRI scans in court. Of course, evidence with dodgy reliability is often used in trials, but the big danger for brain scans is their appearance of reliability. What could be more compelling than a view inside someone's mind? And what could be more dangerous than an unreliable source of evidence that is over-interpreted as reliable, as the recent Indian case attests?

Rissman concludes his paper with a stark warning. He says, "The neuroscientific and legal communities must maintain an ongoing dialogue so that any future real-world applications will be based on, and limited by, controlled scientific evaluations that are well understood by the legal system before their use. Although false positives and false negatives can have important implications for memory theory, their consequences can be much more serious within a legal context."

By Ed Yong
Reprinted with permission from Discover
