
Book Review


The Conscious Universe: The Scientific Truth of Psychic Phenomena

by Dean Radin
(HarperOne 1997)




part seven

Radin starts chapter 6, “Perception at a Distance,” with two “extraordinary stories” [apparent clairvoyant dreams] and says that they “provide the motivation to study whether such experiences are what they appear to be.” Then he jumps right into a discussion of ESP card experiments, many done under the direction of J. B. Rhine at Duke University, to be followed by a review of remote viewing experiments.

He begins with Charles Richet, physiologist and Nobel laureate, who experimented in 1889 with hypnosis and guessing the contents of sealed envelopes. Radin says that the subject performed at odds far beyond chance. Milbourne Christopher gives more details: the subject, Leonie B., identified 5 of 25 playing cards when tested in Paris by Richet. However, “when similar tests with Leonie were repeated in London, her score dropped to pure chance average” (Christopher 1970, p. 18).

Radin notes that “it was eventually discovered that psi performance in telepathy tests did not diminish when there was no ‘sender’” (p. 93). This might be because the phenomenon is illusory, not real, but Radin notes that most researchers took this to mean that they should focus on clairvoyance rather than telepathy. Thus, the history of spirit research went from “survival phenomena, to telepathy research, and then to clairvoyance,” as if this were progress. Radin interprets this progression as evidence of how difficult the topic is and of why it took such a long time to figure out that the basic issue was “the nature of psi perception” (p. 93).

Radin reviews some of the criticisms made of the card experiments: hand shuffling instead of proper randomization procedures, and the physical handling of the cards, which might allow the subject to read a card from impressions on its back. He explains how it took some time before researchers realized that letting the subjects handle the cards, or the envelopes holding them, opened the door to cheating. The psi scientists first separated the experimenter and subject by a screen, then put them in separate rooms, and eventually in separate buildings, all to avoid the possibility of cheating or inadvertent communication by sensory cues.

Radin notes that Kennedy and Uphoff did a study to determine what percentage of the hits in Rhine’s experiments might be due to recording errors; they found that only 1.13% of the data in their study were misrecorded (p. 94), not very much. Radin points out how protocols were instituted to reduce or eliminate recording errors by having duplicate recordings and double-blind data checking.

Next, he takes up statistical questions. To his credit, he mentions the practice of optional stopping (though he doesn’t mention optional starting) and how the protocol now is to specify the number of trials to be collected before the experiment begins. 
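Why optional stopping matters is easy to demonstrate with a simulation. In this hypothetical sketch (mine, not from Radin or his critics), a “subject” guesses at pure chance while an “experimenter” peeks at the running score after every trial and stops as soon as it looks statistically significant; even with no psi at all, a substantial fraction of sessions end in apparent success.

```python
import random

def session_looks_significant(max_trials=1000, p=0.2, z_crit=1.96):
    """Guess at pure chance, but peek at the running score after every
    trial and stop as soon as it crosses the significance threshold."""
    successes = 0
    for n in range(1, max_trials + 1):
        successes += random.random() < p   # chance hit rate: 20%
        if n >= 25:                        # start testing after 25 trials
            se = (p * (1 - p) / n) ** 0.5
            z = (successes / n - p) / se
            if z > z_crit:
                return True                # stop here and declare success
    return False

random.seed(1)
runs = 1000
hits = sum(session_looks_significant() for _ in range(runs))
print(f"'significant' sessions under pure chance: {hits / runs:.1%}")
```

With a trial count fixed in advance, only about 2.5% of null sessions would cross a one-sided z of 1.96; peeking after every trial inflates that rate several-fold, which is why the protocol Radin describes now requires specifying the number of trials before the experiment begins.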

He mentions there were some criticisms of the validity of the 20% chance assumption, but he doesn’t tell the whole story. Rhine and others used decks of 25 cards with 5 each of the Zener card symbols. Nobody in his right mind would guess 6 stars in a row, for example, when there are only 5 stars in the deck, yet in the formal world of probability statistics 6 stars in a row will occur, as will 7 and 8 in a row, though over a very long haul the chance of any one symbol occurring remains theoretically 20%. In real life, it is unlikely that a subject would ever guess even 5 squares in a row, though that is physically possible; even a non-mathematician can figure out that the odds of 5 in a row happening are remote. Even 4 in a row might be deemed so unlikely by most people as to rarely be proffered in an experiment, yet theoretically, in a large number of trials, 4 in a row of each of the five symbols would probably not be that rare. The point is that though the chance of guessing any one of five symbols is theoretically 20%, the chance in a real-life experiment with only 25 cards—especially if they’re hand shuffled and not truly randomized—will be somewhat larger.
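The gap between the formal model and a physical deck can be checked by simulation. This hypothetical sketch compares how often a run of 4 or more identical symbols appears in a shuffled closed deck of 25 Zener cards (5 of each symbol) versus 25 independent draws at 20% each:

```python
import random
from itertools import groupby

def longest_run(seq):
    """Length of the longest run of identical consecutive symbols."""
    return max(len(list(g)) for _, g in groupby(seq))

# A Zener deck: 5 each of circle, cross, waves, square, star.
deck = [symbol for symbol in "OXWQS" for _ in range(5)]

random.seed(0)
N = 20000
closed = sum(longest_run(random.sample(deck, len(deck))) >= 4
             for _ in range(N)) / N
iid = sum(longest_run([random.choice("OXWQS") for _ in range(25)]) >= 4
          for _ in range(N)) / N
print(f"P(run of 4+): shuffled closed deck {closed:.3f} vs. i.i.d. model {iid:.3f}")
```

In the closed deck a run of 6 identical cards is impossible, and runs of 4 or 5 come up markedly less often than the independent-draws model predicts, so odds computed from the formal 20% model do not strictly apply to a hand-shuffled 25-card deck.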

Yet, Rhine and other parapsychologists were fond of asserting extraordinary odds against chance for the performances of some of their most phenomenal psi stars. However, none of them seem to have considered that the odds they were quoting depend on an assumption of true randomization, something Rhine never attempted. Never mind that it would be impossible, using physical decks of cards, to achieve true formal probability in any card-guessing experiment.

In the 1930s, a magician by the name of John Mulholland asked Walter Pitkin of Columbia University how to determine the odds against matching pairs with five possible objects. Of course, Pitkin didn’t have a computer to do his dirty work for him, so he printed up 200,000 cards, half red and half blue, with 40,000 of each of the five ESP card symbols. The cards were mechanically shuffled and read by a machine. The result was two lists of 100,000 randomly selected symbols. One list would represent chance distribution of the symbols and the other would represent chance guessing of the symbols. How did they match up? What is really interesting is that the actual matches didn’t line up with what the accepted odds would predict. The total number of matches was 2% under mathematical expectancy. Runs of 5 matching pairs were 25% under, and runs of 7 were 59% greater than, mathematical expectancy. The point is not whether these runs are typical in a real world of real randomness or whether they represent some peculiarity of the shuffling machine or some other quirk. The point is that Rhine assumed that statistical probability, which presupposes true randomness and a very large number of instances, applies without further consideration to decks of 25 cards shuffled who knows how or how often.

Rhine and all other psi researchers have assumed that any significant departure from the laws of chance is evidence of something paranormal. There are two problems with this assumption, one logical and one methodological. It either begs the question or commits the fallacy of affirming the consequent: If it’s psi, then the data deviate from chance; the data deviate from chance; so, it’s psi. The assumption is also questionable on methodological grounds. Studies have shown that even when no subjects are used there is significant departure from what would be expected theoretically by chance (Alcock 1981: 159). For example, Harvie “selected 50,000 digits from various sources of random numbers and used them to represent ‘target cards’ in an ESP experiment. Instead of having subjects make guesses, a series of 50,000 random numbers were produced by a computer.” He found a hit rate that was significantly less than what would be predicted by chance. “If such significant variation can be produced by comparing random strings with random strings, then the assumption that any significant variation from chance is due to psi seems untenable” (Alcock 1981: 158-159).
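Alcock’s point can be illustrated with a hypothetical simulation in the spirit of Harvie’s test: pit computer-generated random “guesses” against random “targets,” with no subject anywhere, and count how often the hit rate deviates “significantly” from 20%.

```python
import random

def no_subject_experiment(n=10000, symbols=5):
    """Match random 'guesses' against random 'targets'; return the z-score
    of the hit count against the theoretical 20% chance rate."""
    hits = sum(random.randrange(symbols) == random.randrange(symbols)
               for _ in range(n))
    p = 1 / symbols
    return (hits - n * p) / (n * p * (1 - p)) ** 0.5

random.seed(2)
experiments = 400
significant = sum(abs(no_subject_experiment()) > 1.96
                  for _ in range(experiments))
print(f"{significant} of {experiments} subject-free experiments were 'significant'")
```

Roughly 1 in 20 of these subject-free runs crosses the conventional significance threshold, as the statistics guarantee some will; a “significant” deviation by itself therefore cannot certify psi.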

Thus, it is a bit of an exaggeration for Radin to claim that statistician Burton Camp “finally settled” the issue of the statistical criticisms (p. 95) when he declared that Rhine’s “statistical analysis is essentially valid” (p. 96). 

In a section called “Results,” Radin lumps together the data from 142 articles on card-guessing experiments published between 1880 and 1940. He claims they represent 3.6 million individual trials by 4,600 subjects in 185 experiments. However, using this combined database to demonstrate that the hit rate was significantly over the 20% chance rate, and was thus “sufficient to settle the question about the existence of psi perception,” is unjustified. The studies themselves are questionable due to the problems he’s already mentioned, and there is no way these studies are all of equal value. If you combine the data from 185 questionable experiments, you don’t get one big unquestionable result.

He mentions that the data from just two dozen investigators from 1934-1939 convinced H. J. Eysenck, chairman of the psychology department at the University of London, of the reality of the paranormal (p. 96). Eysenck wrote in 1957:

Unless there is a gigantic conspiracy involving some thirty University departments all over the world, and several hundred highly respected scientists in various fields, many of them originally sceptical to the claims of the psychical researchers, the only conclusion that the unbiased observer can come to must be that there does exist a small number of people who obtain knowledge existing either in other people's minds, or in the outer world, by means yet unknown to science.

However, as James Randi noted:

Such a “gigantic conspiracy” is not at all necessary to explain the fact that some scientists have seen what they expected and wanted to see, and have accepted the conclusions of others without question. The fact that none of the findings of the paranormalists have been established, but like the work of Soal, Levy, Rhine and so many others, have instead fallen out of serious consideration upon subsequent examination, has apparently not altered Dr. Eysenck's conclusion. (Randi 1995: 91)

Radin argues that the reason the data haven’t moved the world to accept the reality of telepathy or clairvoyance is that critics keep bringing up the file-drawer problem. Not so. The main reason these studies are unconvincing is not that only the positive studies have been published while the negative ones are stuck in the file drawer. The studies themselves suffer from a variety of flaws, including one type of flaw that Radin claims “cannot plausibly explain the results,” namely, sensory leakage. We can agree that the results were probably not due to chance or to selective reporting. But there are other problems he doesn’t mention, such as poor randomization techniques, inadequate documentation, inadequate controls, sloppy protocols, misuse of statistical methods, use of questionable assumptions, and fraud.

The mentalist and historian of parapsychology Milbourne Christopher was not as impressed with the data that excited Professor Eysenck. According to Christopher:

 Many brilliant men have investigated the paranormal but they have yet to find a single person who can, without trickery, send or receive even a three-letter word under test conditions (Christopher 1970: 37).

We don’t agree that lumping together these flawed studies and getting odds against chance on the order of “a billion trillion to one” (p. 97) proves anything about the reality of psi.

Radin ends this section with a comment about what is known as the decline effect: subjects in psi experiments tend to decline in performance with repeated testing. Rather than take this as natural regression toward the mean (over time, all subjects should move toward chance performance), Radin and other parapsychologists explain it away by saying that it is due to the boring nature of the testing.
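Regression toward the mean predicts the decline effect exactly, as a hypothetical simulation shows: let 1,000 subjects each guess 100 Zener cards at pure chance, crown the top scorers as “psi stars,” and then retest them.

```python
import random

random.seed(3)
N_SUBJECTS, N_TRIALS, P = 1000, 100, 0.2

def score():
    """Hits in one run of N_TRIALS pure-chance guesses."""
    return sum(random.random() < P for _ in range(N_TRIALS))

first_run = [score() for _ in range(N_SUBJECTS)]
stars = sorted(first_run, reverse=True)[:20]           # top 2% look like psi stars
stars_avg = sum(stars) / len(stars)
retest_avg = sum(score() for _ in stars) / len(stars)  # same 'stars', fresh trials
print(f"stars' first run: {stars_avg:.1f} hits; "
      f"retest: {retest_avg:.1f} (chance expectation: {N_TRIALS * P:.0f})")
```

The “stars” fall back toward chance on retest with no boredom required; selecting subjects for an extreme first score guarantees the decline.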

The Remote Viewing (RV) Experiments 

The Stanford Research Institute (SRI) was “a scientific think tank affiliated with Stanford University” until the late 1970s, when it became the independent SRI International (p. 98). In 1972, physicists Harold Puthoff and Russell Targ founded the SRI remote viewing program. Targ left in 1982; Puthoff in 1985 (Marks 2000: 71). Physicist Edwin May joined SRI in 1975 and became the director of the program when Puthoff left.

In 1990, the program moved to another “think tank,” Science Applications International Corporation (SAIC), a major defense contractor and a Fortune 500 company with some 38,000 employees worldwide (Marks: 73). 

Radin says the RV program “finally wound down in 1994.” He doesn’t mention that the CIA shut it down because, after 24 years of experiments, it was clear that remote viewing was of no practical value to the intelligence community (Marks: 75). The CIA report noted that in the case of remote viewing a large amount of irrelevant, erroneous information was provided and that there was little agreement among the reports of the remote viewers (Marks: 77). Radin doesn’t mention that May objected to the CIA report because it failed to note that he had four independent replications of remote viewing. May didn’t publicize the fact, however, that there were also at least six reported instances of failed replication.

Radin makes it sound like the government’s money was well spent (somewhere between 20 and 24 million dollars over more than 20 years). It’s easy to understand why remote viewing would be of interest to the military and spy agencies. It is hard to understand, however, why those agencies would abandon RV if it was as successful as Radin makes out.

Radin doesn’t evaluate the studies. Rather, he pulls out some selective examples of successes, i.e., reports or drawings that were judged to be very accurate. What he doesn’t reveal is that the later RV studies—done under the direction of May and better designed and controlled than the ones done by Targ and Puthoff—were nevertheless fatally flawed, because May, the director of the program, was the sole judge of the accuracy of the reports and he conducted the experiments in secret (which made peer review and replication impossible). David Marks says that he tried for years to get May to let him look at his data, but May wouldn’t allow it (Marks 2000).

There were hundreds, maybe thousands, of trials in which a remote viewer would draw something and give a verbal report of what he was seeing. It would be highly unusual if there weren’t some that seemed very accurate for their targets. Since an exact drawing or report was never required for success, it is always possible that an ambiguous image will be seen as fitting a particular target, especially if the judge knows what the target is! Furthermore, we have only May’s word that the very detailed descriptions he says were spot on were as he says they were. No independent examiner has ever seen his data, as far as I know.

Radin is probably correct in claiming that all possible paths for sensory leakage can be controlled for in RV experiments, but he doesn’t mention the actual method used by May to judge the results. Radin notes that “a judge who was blind to the true target looked at the viewer’s response (a sketch and a paragraph or two of verbal description) along with photographs or videos of five possible targets. Four of these targets were decoys and one was the real target” (p. 100). In fact, when this protocol was used by Marks, he was unable to replicate either the RV experiments of Targ and Puthoff or those of May. Marks analyzed the Targ and Puthoff experiments and found that they systematically violated the rule about blind judging: there was substantial evidence that Targ and Puthoff cued their judges by including dates and references to previous experiments in the transcripts, “enabling the judges to successfully match the transcripts against the list of target sites” (Marks: 57). There were a number of other flaws in the Targ and Puthoff experiments detailed by Marks (2000: chapter 3) and Randi (1982: chapter 7), none of which are mentioned by Radin.

Radin makes it sound like constructive criticisms led researchers to refine their techniques to prevent any cheating or inadvertent cuing, but nothing could be further from the truth. He is correct that the positive results of May’s analysis of all the RV studies done at SRI can’t be explained by chance. But he’s wrong to claim that “design problems couldn’t completely explain away the results” (p. 101). The SRI studies were fatally flawed and could not be replicated (Marks 2000). The SAIC studies (1989-1993) were likewise flawed.

Radin’s account of the CIA-commissioned report is incomplete. It’s true that Jessica Utts and Ray Hyman were the evaluators of the SAIC studies. Utts coauthored several papers with Ed May, so she was not a disinterested party, and Hyman is a known skeptic, so he’s not disinterested either. But the CIA wanted a review done quickly and had to pick people knowledgeable about the studies, and they wanted a believer and a skeptic for balance, I suppose. They were to focus on two issues: (1) Is there scientific justification for the reality of remote viewing? (2) Is remote viewing of practical use for intelligence gathering?

Utts, a statistician at UC Davis and psi advocate, claimed there was good statistical evidence to support the reality of RV; Hyman disagreed, mainly on the grounds that only one judge was used throughout the experiments and he was the principal investigator.

…given the Principal Investigator’s familiarity with the viewers, the target set, and the experimental procedures, it is possible that subtle, unintentional factors may have influenced the results obtained in these studies. (Marks: 76)

The report concluded that remote viewing is of little value and the CIA terminated the program known as Star Gate.

Radin describes the SAIC studies as “rigorously controlled sets of experiments that had been supervised by a distinguished oversight committee of experts from a variety of scientific disciplines” (p. 101). But he makes no mention of the fact that May alone judged all the cases and has not let anyone see the data, even though it is all unclassified. And even though the SRI studies were fatally flawed, the SAIC folks and most believers in psi consider them excellent studies that have proven RV.

Radin is disingenuous when he says the “government review committee” came to 6 general conclusions. His reference is to Jessica Utts’s article “An assessment of the evidence for psychic functioning” in the Journal of Scientific Exploration. Utts did not represent the government. In any case, the first item she listed was that free-response remote viewing was more successful than forced-choice remote viewing, which hardly seems like a major discovery. The others: some people performed better than others; only about 1% of those tested were very good at remote viewing; training is worthless and RV ability can’t be improved; feedback seems to enhance performance; and shielding the target made no difference to the quality of RV.

So, Utts, an active researcher in the field, reports that the evidence is in and has been replicated. Hyman, whom Radin calls “the devil’s advocate” for some reason, agreed that the effect sizes in the SAIC and ganzfeld studies aren’t likely due to chance, the file-drawer effect, or inappropriate statistical testing or inferences, but held that this is not sufficient to warrant claiming that psi has been proved.

Radin mentions that Julie Milton did an analysis of 78 free-response psi experiments published between 1964 and 1993 and found that “the overall effect resulted in odds against chance of ten million to one” (p. 106). But he doesn’t mention that only two of the studies had proper safeguards for the crucial protocol of “avoiding giving cues to judges and keeping the experimenter blind to the identity of the target in telepathy and clairvoyance” (Marks: 93). Nor does Radin mention that 26% of the studies failed to ensure that the person transcribing the subject’s descriptions was blind to the target’s identity, and that this lapse was associated with a significantly higher effect size than in the studies that contained the safeguard (Marks: 93-94). Marks reminds us that “statistical significance and real-world importance are not the same thing” (p. 94).

The PEAR Studies

The last set of experiments that Radin reviews are the PEAR studies on “precognitive remote perception” (p. 103). In these studies, the targets are selected after the remote viewer describes the target. Radin gives one example of how successful these studies were, and the example indicates the problem with allowing a vague or ambiguous stimulus to be described and then later allowing a biased judge to decide whether there is a fit. The remote viewer describes being inside a large bowl. The target selected later was a radio telescope, which, according to Radin, “resembles a large bowl.” With judging standards this loose, there is no need to look for paranormal explanations of why these studies succeeded with odds significantly higher than chance.

Radin is a believer in the notion that psi is more likely to show up in altered states. He mentions a study by Stanford and Stein that contrasted psi studies under hypnosis vs. ordinary-state studies. Hypnosis studies got better results. But he’s guessing as to why. Nobody really knows. 

Finally, he mentions Gertrude Schmeidler’s “sheep-goat” effect. In this section, he writes something most skeptics could agree with: “Together, culture, experience, and beliefs are potent shapers of our sense of reality. They are, in effect, hidden persuaders, powerful reinforcers of our sense of what is real” (p. 108).

Given all the gaps in his review of the data, we can hardly accept at face value his claim that the ESP, remote viewing, and other psi tests have no fatal design flaws. I think we can agree that the data are not likely due to chance or to selective reporting. His claim that some experiments have been replicated “thousands of times” is hollow, since replicating flawed studies is not what is needed. His claim that psi effects “measured across the various experiments are remarkably similar to one another” is also hollow, especially since there are so many flaws in so many studies.

end of part seven 


Alcock, James E. (1981). Parapsychology: Science or Magic? Pergamon Press.

Christopher, Milbourne (1970). ESP, Seers & Psychics. Thomas Y. Crowell Co.

Marks, David (2000). The Psychology of the Psychic. Buffalo, NY: Prometheus Books.

Randi, James (1982). Flim-Flam! Psychics, ESP, Unicorns, and Other Delusions. Buffalo, NY: Prometheus Books.

Randi, James (1995). An Encyclopedia of Claims, Frauds, and Hoaxes of the Occult and Supernatural. New York: St. Martin's Press.
