http://www.nytimes.com/2011/01/06/science/06esp.html
By BENEDICT CAREY
Published: January 5, 2011
One of psychology’s most respected journals has agreed to publish a paper presenting what its author describes as strong evidence for extrasensory perception, the ability to sense future events.
Work by Daryl J. Bem on extrasensory perception is scheduled to be published this year.
The decision may delight believers in so-called paranormal events, but it is already mortifying scientists. Advance copies of the paper, to be published this year in The Journal of Personality and Social Psychology, have circulated widely among psychological researchers in recent weeks and have generated a mixture of amusement and scorn.
The paper describes nine unusual lab experiments performed over the past decade by its author, Daryl J. Bem, an emeritus professor at Cornell, testing the ability of college students to accurately sense random events, like whether a computer program will flash a photograph on the left or right side of its screen. The studies include more than 1,000 subjects.
Some scientists say the report deserves to be published, in the name of open inquiry; others insist that its acceptance only accentuates fundamental flaws in the evaluation and peer review of research in the social sciences.
“It’s craziness, pure craziness. I can’t believe a major journal is allowing this work in,” said Ray Hyman, an emeritus professor of psychology at the University of Oregon and a longtime critic of ESP research. “I think it’s just an embarrassment for the entire field.”
The editor of the journal, Charles Judd, a psychologist at the University of Colorado, said the paper went through the journal’s regular review process. “Four reviewers made comments on the manuscript,” he said, “and these are very trusted people.”
All four decided that the paper met the journal’s editorial standards, Dr. Judd added, even though “there was no mechanism by which we could understand the results.”
But many experts say that is precisely the problem. Claims that defy almost every law of science are by definition extraordinary and thus require extraordinary evidence. Neglecting to take this into account — as conventional social science analyses do — makes many findings look far more significant than they really are, these experts say.
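A rough illustration of that argument, with made-up numbers: if the prior probability that ESP exists is put at one in a million, then even data that are, say, 20 times more likely under ESP than under pure chance leave the hypothesis deeply improbable. The short Python sketch below uses those assumed figures; neither the prior nor the likelihood ratio comes from Dr. Bem’s paper or its critics.

    # Illustrative arithmetic only: both the one-in-a-million prior and the
    # 20-to-1 likelihood ratio are assumptions chosen for this example.
    prior_esp = 1e-6              # assumed prior probability that ESP exists
    likelihood_ratio = 20.0       # assumed: data 20x more likely if ESP is real

    prior_odds = prior_esp / (1 - prior_esp)
    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)

    print(f"posterior probability of ESP: {posterior_prob:.6f}")   # about 0.00002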
“Several top journals publish results only when these appear to support a hypothesis that is counterintuitive or attention-grabbing,” Eric-Jan Wagenmakers, a psychologist at the University of Amsterdam, wrote by e-mail. “But such a hypothesis probably constitutes an extraordinary claim, and it should undergo more scrutiny before it is allowed to enter the field.”
Dr. Wagenmakers is co-author of a rebuttal to the ESP paper that is scheduled to appear in the same issue of the journal.
In an interview, Dr. Bem, the author of the original paper and one of the most prominent research psychologists of his generation, said he intended each experiment to mimic a well-known classic study, “only time-reversed.”
In one classic memory experiment, for example, participants study 48 words and then divide a subset of 24 of them into categories, like food or animal. The act of categorizing reinforces memory, and on subsequent tests people are more likely to remember the words they practiced than those they did not.
In his version, Dr. Bem gave 100 college students a memory test before they did the categorizing — and found they were significantly more likely to remember words that they practiced later. “The results show that practicing a set of words after the recall test does, in fact, reach back in time to facilitate the recall of those words,” the paper concludes.
In another experiment, Dr. Bem had subjects choose which of two curtains on a computer screen hid a photograph; the other curtain hid nothing but a blank screen.
A software program randomly posted a picture behind one curtain or the other — but only after the participant made a choice. Still, the participants beat chance, picking the correct curtain 53 percent of the time rather than the 50 percent expected from guessing, at least when the photos being posted were erotic ones. They did not do better than chance on negative or neutral photos.
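For a sense of how a conventional analysis treats such a margin, here is a minimal sketch of a one-sided binomial test in Python. The 1,000-trial count is an assumption for illustration, not the number of trials reported in the paper; the point is only that a 53 percent hit rate over enough trials clears the usual 5 percent significance bar.

    from math import comb

    n_trials = 1000                  # assumed trial count, for illustration only
    hits = int(0.53 * n_trials)      # 53 percent correct, as reported

    # Exact one-sided binomial p-value: the chance of doing at least this well by luck alone.
    p_value = sum(comb(n_trials, k) for k in range(hits, n_trials + 1)) / 2 ** n_trials
    print(f"{hits} hits in {n_trials} trials, one-sided p = {p_value:.3f}")   # about 0.03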
“What I showed was that unselected subjects could sense the erotic photos,” Dr. Bem said, “but my guess is that if you use more talented people, who are better at this, they could find any of the photos.”
In recent weeks science bloggers, researchers and assorted skeptics have challenged Dr. Bem’s methods and his statistics, with many critiques digging deep into the arcane but important fine points of crunching numbers. (Others question his intentions. “He’s got a great sense of humor,” said Dr. Hyman, of Oregon. “I wouldn’t rule out that this is an elaborate joke.”)
Dr. Bem has generally responded in kind, sometimes accusing critics of misunderstanding his paper, other times of building a strong bias into their own re-evaluations of his data.
In one sense, it is a historically familiar pattern. For more than a century, researchers have conducted hundreds of tests to detect ESP, telekinesis and other such things, and when such studies have surfaced, skeptics have been quick to shoot holes in them.
But in another way, Dr. Bem is far from typical. He is widely respected for his clear, original thinking in social psychology, and some people familiar with the case say his reputation may have played a role in the paper’s acceptance.
Peer review is usually an anonymous process, with authors and reviewers unknown to one another. But all four reviewers of this paper were social psychologists, and all would have known whose work they were checking and would have been responsive to the way it was reasoned.
Perhaps more important, none were topflight statisticians. “The problem was that this paper was treated like any other,” said an editor at the journal, Laura King, a psychologist at the University of Missouri. “And it wasn’t.”
Many statisticians say that conventional social-science techniques for analyzing data make an assumption that is disingenuous and ultimately self-deceiving: that researchers know nothing about the probability of the so-called null hypothesis.
In this case, the null hypothesis would be that ESP does not exist. Refusing to give that hypothesis weight makes no sense, these experts say; if ESP exists, why aren’t people getting rich by reliably predicting the movement of the stock market or the outcome of football games?
Instead, these statisticians prefer a technique called Bayesian analysis, which seeks to determine whether the outcome of a particular experiment “changes the odds that a hypothesis is true,” in the words of Jeffrey N. Rouder, a psychologist at the University of Missouri who, with Richard D. Morey of the University of Groningen in the Netherlands, has also submitted a critique of Dr. Bem’s paper to the journal.
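As a rough sketch of that idea, one simple default comparison for yes-or-no guessing data pits “the hit rate is exactly 50 percent” against “the hit rate is unknown,” with a flat prior over all possible rates. The Python sketch below is in the spirit of those critiques, not their exact method (Dr. Rouder and Dr. Morey use more refined default priors), and the trial count is again an assumption. On the same made-up data that yield a “significant” p-value of about 0.03, the Bayes factor comes out near 4 in favor of pure chance; combined with any skeptical prior, the odds on ESP barely move.

    from math import lgamma, log, exp

    def log_binom(n, k):
        # Log of the binomial coefficient C(n, k), via log-gamma to avoid overflow.
        return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

    def bayes_factor_null(n_trials, hits):
        # BF01: how strongly the data favor "pure chance" over "some unknown hit rate".
        log_h0 = log_binom(n_trials, hits) + n_trials * log(0.5)   # P(hits | rate = 0.5)
        log_h1 = -log(n_trials + 1)                                # P(hits | flat prior on rate)
        return exp(log_h0 - log_h1)

    # Same assumed data as above: 530 hits in 1,000 trials.
    print(f"BF01 = {bayes_factor_null(1000, 530):.1f}")   # roughly 4-to-1 in favor of chance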
Physics and biology, among other disciplines, overwhelmingly suggest that Dr. Bem’s experiments have not changed those odds, Dr. Rouder said.
So far, at least three efforts to replicate the experiments have failed. But more are in the works, Dr. Bem said, adding, “I have received hundreds of requests for the materials” to conduct studies.