Comments on Critiques

I received an email from Ivar Fernemo with some questions he still had after careful reading of the GCP website. The issues he raises are of potential interest to others, especially with regard to some of the criticisms made by serious people about the GCP data and analyses.

Dear Roger,

Not long ago we met in Albuquerque; you may remember me as the only foreigner. Over the last few days I have studied the information on your web-site to the best of my ability, and I have a few questions which I would very much appreciate it if you found time to respond to, ever so briefly. Some of my questions may simply be misunderstandings, as I am not very knowledgeable in statistics.

I understand the different EGG designs are based on a physical (quantum-based) device producing a random datastream, followed by the XOR device.

1. Isn't the effect of the design that the XOR function wipes out any previous deviation from a random pattern?

The XOR removes bias in the mean. The typical design compares each voltage measurement in the analog part of the hardware REG (e.g., the sum of electrons in a sample, measured as a voltage) against a threshold level set so that half the voltage measures fall above it and half below. You can imagine that various influences, such as temperature and component aging, might shift the relative level of the threshold, resulting in a bias. This is what the XOR cancels, logically. The resulting bit-stream is still fully random, and deviations can and do occur, with the statistics predicted by theory (albeit with a possible change in the second-order statistics as the price for keeping the first-order statistics unbiased).
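To make the logic concrete, here is a minimal sketch in Python. It assumes a fixed alternating XOR mask (0,1,0,1,...) and 200-bit trial sums; both are illustrative assumptions about the design, not a specification of the actual REG hardware. The point is that a first-order bias in the raw bits is cancelled while the stream remains random:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Raw hardware bits with a small first-order bias, e.g. P(1) = 0.52
raw = (rng.random(n) < 0.52).astype(np.uint8)

# Deterministic alternating mask 0,1,0,1,... (one common design choice)
mask = (np.arange(n) % 2).astype(np.uint8)
out = raw ^ mask

# Group the bits into 200-bit trial sums (an assumed trial size)
raw_sums = raw.reshape(-1, 200).sum(axis=1)
out_sums = out.reshape(-1, 200).sum(axis=1)

print(raw_sums.mean())  # ~104: the bias shows up as a shifted mean
print(out_sums.mean())  # ~100: the XOR cancels it, logically

# The output is still random: trial sums vary around 100 with close to
# the theoretical variance of 50 (in fact 200*p*(1-p) = 49.92, the small
# second-order price of keeping the first order unbiased)
print(out_sums.var())
```

Note that the mask flips every other bit, so a constant shift in P(1) averages out exactly, while genuine variation in the bit stream passes through.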

2. If so, isn't the conclusion that whatever deviations are registered must have happened somewhere and sometime after the data were transmitted from the EGG to Princeton? (This question is quite similar to objections raised by Jeff Scargle.) Edward May and James Spottiswoode claim in their analysis -- if I have understood them correctly -- that no significant effects are found with other choices of time window.

No. As I said, the statistics are essentially unchanged -- there is variation exactly as before, but around the *unbiased* mean. Scargle's point is that the XOR must prevent any effect. But he is working from an inappropriate model, namely a physical force model, in which he conceives that the effect (if any) must be "caused" by something like an EM field. Our XOR explicitly precludes such physical fields from affecting the REG, because we do not want it to be vulnerable to spurious sources like radio, the power grid, cell-phone radiation, etc. The fact is, we see changes from expectation in the laboratory experiments, in the FieldREG experiments, and in the GCP data (and this is not happening after the data are transmitted to Princeton for archiving).

4. How do you view their criticism?

The Spottiswoode and May criticism is itself questionable. They looked at one event (9/11), and they chose to ask whether a half hour more or less would have produced a significant effect. Who knows how they arrived at this specific (post facto) choice? Looking at the actual data, we can see that the measurement began to depart from expectation around the time of the attacks and continued with a strong trend (slope) for 50 hours, while showing the usual random variation. Take a look at the first figure on the 9/11 explorations page.

I think that figure shows non-normal data by any reasonable standard, and while it would be inappropriate to make formal claims, because it does not test a pre-defined hypothesis, it certainly suggests that we need to look at longer spans of time when big events occur (our formal hypothesis specified 4 hours and 10 minutes, based on previous experience with what we regarded as similar events). As for S&M's criticism, if we hold it to the same standards of evidence we apply to our own formal protocols, their claims are vulnerable. Looking at random data after the fact, one can pick a moment to make any point -- to say the result is significant, or to say it is not. That is exactly what they have done.
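For readers who want to see how a cumulative deviation trace of that kind is built, here is a rough sketch in Python. The parameters (37 eggs, one 200-bit trial per second over 50 hours) and the network statistic (a squared Stouffer Z across eggs, minus its expectation of 1, accumulated over time) are assumptions for illustration, not a specification of the GCP analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
hours, eggs = 50, 37                 # 50-hour span, hypothetical egg count
seconds = hours * 3600

# Null data: each egg reports one 200-bit trial sum per second,
# normalized to a Z score (mean 100, variance 50 under chance)
trials = rng.binomial(200, 0.5, size=(seconds, eggs))
z = (trials - 100.0) / np.sqrt(50.0)

# Stouffer Z across eggs each second; its square has expectation 1
stouffer = z.sum(axis=1) / np.sqrt(eggs)
cumdev = np.cumsum(stouffer**2 - 1.0)

# Under the null hypothesis this trace is a driftless random walk.
# A persistent slope sustained over many hours, as in the 9/11 figure,
# is the kind of departure from expectation described above.
print(cumdev[-1])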

In the main body of their criticism, Spottiswoode and May confuse the formal, pre-defined trials with the explorations of the data we use to provide context and to learn how to formulate better hypotheses. Our formal series of hypothesis tests now comprises about 225 events, and when these are combined in the equivalent of a meta-analysis, the odds against chance are well beyond the 0.05 level Spottiswoode and May are worried about. Moreover, we now have enough events in the formal series to get a good estimate of the average effect size, equivalent to a Z of 0.3. This means that no single event should be expected to show significance (which was S&M's complaint); instead, we must patiently assemble many replications if we wish to learn something.
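The arithmetic behind that last point is worth spelling out. If each of N independent events yields a Z score averaging about 0.3, a Stouffer combination gives a composite Z of 0.3 times the square root of N; with N = 225 that is 4.5, far beyond the 0.05 threshold, even though a single Z of 0.3 is nowhere near significant on its own. A small check in Python, assuming a one-tailed test and Stouffer's method (one standard way to combine such scores, used here for illustration):

```python
from math import sqrt
from statistics import NormalDist

n_events, mean_z = 225, 0.3
composite = mean_z * sqrt(n_events)       # Stouffer: sum(Z)/sqrt(N)
tail = 1 - NormalDist().cdf(composite)    # one-tailed p-value

print(composite)                          # 4.5
print(tail)                               # ~3.4e-06

# By contrast, a single event with Z = 0.3 has p ~ 0.38: no individual
# significance, which is why many replications must be accumulated.
print(1 - NormalDist().cdf(mean_z))       # ~0.382
```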

