## Statistical Significance

Suppose there was only one person in group R and one in group X. It would be easy for a positive experimental result to occur by accident. If radishes had no effect at all, there would still be a reasonable chance that the person in group R would have fewer cavities; maybe the person in group X just had a bad checkup.

If there are ten people in each group and one group has substantially more people with cavities than the other, we can be fairly confident that the radishes were the deciding factor. While it is common for one person to have more cavities than another by chance, we would not expect chance to put almost all the people with few cavities in the same group.

But if 4 people in group R had cavities, as compared with 6 in group X, would it be safe to conclude that the radishes had an important effect? Probably not. Even if eating radishes were irrelevant, we would not be surprised to find one group ahead of the other by two. On the other hand, if there were two people with cavities in group R and nine in group X, it would seem quite unlikely that this could have occurred by chance.

When experiments are properly designed, it is typical to specify a statistical test on the results and only consider the test a success if there is less than a 5% (or sometimes 1%) chance that the result could have occurred by chance. When this criterion is met, the result is said to be "statistically significant". It is important to note that just because a result is statistically significant, it is not guaranteed to be real: by the very definition of the threshold, as many as one in twenty such results can still be due to chance.
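One way to check these intuitions is to count outcomes directly: if radishes do nothing, the cavity cases land in the two groups at random, and the chance of a split at least as lopsided as the one observed can be computed exactly. A minimal sketch in Python (the function name is illustrative, and for simplicity it computes only the one-sided probability):

```python
from math import comb

def chance_of_split(group_size, cavities_r, cavities_x):
    """Chance that, with cavity cases assigned at random among all
    participants, group R receives cavities_r or fewer of the cases."""
    n = 2 * group_size                # total participants
    k = cavities_r + cavities_x       # total cavity cases
    # Count the ways group R can receive 0..cavities_r of the k cases
    # (math.comb returns 0 when asked to choose more than exist).
    favorable = sum(
        comb(k, i) * comb(n - k, group_size - i)
        for i in range(cavities_r + 1)
    )
    return favorable / comb(n, group_size)

# 4 vs 6: an edge of two is unremarkable.
print(chance_of_split(10, 4, 6))   # ~0.33, far above 5%
# 2 vs 9: very unlikely to be pure luck.
print(chance_of_split(10, 2, 9))   # ~0.003, well below 5%
```

Even doubling the second figure to cover a two-sided test (either group could have come out ahead) leaves it comfortably under the 5% threshold, while the 4-versus-6 split is nowhere near significant.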
There is an additional rule that must be followed if a claim of statistical significance is to be valid: the exact test that is going to be performed must be determined before the results are seen, not after.
Occasionally tests of ESP will come out with a surprisingly low score; say the subject guesses only 6 right out of 100 when they would average 20 by chance. Sometimes researchers claim such a result shows there is an effect after all, with the subject somehow avoiding the correct answers. But a test for "scoring too low" that is chosen only after the data are in carries no statistical weight.
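The surprise value of such a low score is easy to quantify. Assuming pure guessing with a 1-in-5 chance per trial (the usual five-symbol setup, which gives the 20-in-100 average above), the chance of 6 or fewer hits in 100 is a straightforward binomial tail; the sketch below uses an illustrative function name:

```python
from math import comb

def chance_of_at_most(n, p, k):
    """Chance of k or fewer successes in n tries,
    each succeeding independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

low = chance_of_at_most(100, 0.2, 6)
print(low)   # well under 1 in 1,000
# But this number only means something if "scoring too low" was the
# test specified before the experiment was run; computed afterward,
# it is exactly the kind of invalid post-hoc test described above.
```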