In statistics, the p-value is a function of the observed sample results (a statistic) that is used for testing a statistical hypothesis. More specifically, the p-value is defined as the probability of obtaining a result equal to or "more extreme" than what was actually observed, assuming that the hypothesis under consideration is true.^{[2]} Here, "more extreme" depends on the way the hypothesis is tested. Before the test is performed, a threshold value is chosen, called the significance level of the test, traditionally 5% or 1%,^{[3]} and denoted α.
If the p-value is equal to or smaller than the significance level (α), it suggests that the observed data are inconsistent with the assumption that the null hypothesis is true, and thus that hypothesis must be rejected (but this does not automatically mean the alternative hypothesis can be accepted as true). When the p-value is calculated correctly, such a test is guaranteed to control the Type I error rate to be no greater than α.
Since the p-value is used in frequentist inference (and not Bayesian inference), it does not in itself support reasoning about the probabilities of hypotheses but serves only as a tool for deciding whether to reject the null hypothesis.
Statistical hypothesis tests making use of p-values are commonly used in many fields of science and social sciences, such as economics, psychology,^{[4]} biology, criminal justice and criminology, and sociology.^{[5]} Misuse of this tool continues to be the subject of criticism.^{[6]}
Contents

1 Basic concepts
2 Definition and interpretation
3 Calculation
4 Examples
4.1 One roll of a pair of dice
4.2 Five heads in a row
4.3 Sample size dependence
4.4 Alternating coin flips
4.5 Impossible outcome and very unlikely outcome
4.6 Coin flipping
5 History
6 Misunderstandings
7 Criticisms
8 Related quantities
9 See also
10 Notes
11 References
12 Further reading
13 External links
Basic concepts
The p-value is used in the context of null hypothesis testing in order to quantify the idea of statistical significance of evidence.^{[1]} Null hypothesis testing is a reductio ad absurdum argument adapted to statistics. In essence, a claim is shown to be valid by demonstrating the improbability of the consequence that results from assuming the counterclaim to be true.
As such, the only hypothesis that needs to be specified in this test, and which embodies the counterclaim, is referred to as the null hypothesis. A result is said to be statistically significant if it enables the rejection of the null hypothesis. The rejection of the null hypothesis implies that the correct hypothesis lies in the logical complement of the null hypothesis. However, unless there is a single alternative to the null hypothesis, the rejection of the null hypothesis does not tell us which of the alternatives might be the correct one.
For instance, if the null hypothesis is assumed to be a standard normal distribution N(0,1), the rejection of this null hypothesis can either mean (i) the mean is not zero, or (ii) the variance is not unity, or (iii) the distribution is not normal, depending on the type of test performed. However, supposing we manage to reject the zero mean hypothesis, the null hypothesis test does not tell us which nonzero value we should adopt as the new mean.
In statistics, a statistical hypothesis refers to a probability distribution that is assumed to govern the observed data.^{[2]} If X is a random variable representing the observed data and H is the statistical hypothesis under consideration, then the notion of statistical significance can be naively quantified by the conditional probability Pr(X \mid H), which gives the likelihood of the observation if the hypothesis is assumed to be correct. However, if X is a continuous random variable and an instance x is observed, then Pr(X = x \mid H) = 0. Thus, this naive definition is inadequate and needs to be changed to accommodate continuous random variables.
Nonetheless, it helps to clarify that p-values should not be confused with Pr(H \mid X), the probability of the hypothesis given the data, or Pr(H), the probability of the hypothesis being true, or Pr(X), the probability of observing the given data.
Definition and interpretation
Figure: Example of a p-value computation. The vertical coordinate is the probability density of each outcome, computed under the null hypothesis; the p-value is the area under the curve past the observed data point.
The p-value is defined as the probability, under the assumption of hypothesis H, of obtaining a result equal to or more extreme than what was actually observed. Depending on how it is looked at, "more extreme than what was actually observed" can mean \{ X \geq x \} (a right-tail event), \{ X \leq x \} (a left-tail event), or the "smaller" of \{ X \leq x \} and \{ X \geq x \} (a double-tail event). Thus, the p-value is given by

Pr(X \geq x \mid H) for a right-tail event,

Pr(X \leq x \mid H) for a left-tail event,

2\min\{Pr(X \leq x \mid H), Pr(X \geq x \mid H)\} for a double-tail event.
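The three tail events translate directly into code. The following is a minimal sketch (assuming SciPy is available; the standard normal null distribution and the observed value are illustrative assumptions, not from the text):

```python
# Minimal sketch of the three tail definitions. The null distribution
# N(0, 1) and the observed value x = 1.7 are illustrative assumptions.
from scipy import stats

dist = stats.norm(0, 1)   # distribution of X under the hypothesis H
x = 1.7                   # hypothetical observed value

p_right = dist.sf(x)                  # Pr(X >= x | H), right-tail event
p_left = dist.cdf(x)                  # Pr(X <= x | H), left-tail event
p_double = 2 * min(p_left, p_right)   # double-tail event

print(p_right, p_left, p_double)
```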
The smaller the p-value, the greater the significance, because it tells the investigator that the hypothesis under consideration may not adequately explain the observation. The hypothesis H is rejected if any of these probabilities is less than or equal to a small, fixed, but arbitrarily predefined threshold value \alpha, which is referred to as the level of significance. Unlike the p-value, the \alpha level is not derived from any observational data and does not depend on the underlying hypothesis; the value of \alpha is instead determined by the consensus of the research community that the investigator is working in.
Since the value x that defines the left-tail or right-tail event is a realization of a random variable, the p-value is a function of x and a random variable in itself, distributed uniformly over the [0,1] interval when x is continuous (and the null hypothesis is true). Thus, the p-value is not fixed. This implies that the p-value cannot be given a frequency-counting interpretation, since the probability has to be fixed for the frequency-counting interpretation to hold. In other words, if the same test is repeated independently bearing upon the same overall null hypothesis, it will yield different p-values at every repetition. Nevertheless, these different p-values can be combined using Fisher's combined probability test. An instantiation of this random p-value can still be given a frequency-counting interpretation with respect to the number of observations taken during a given test, as per the definition: it is the percentage of observations more extreme than the one observed, under the assumption that the null hypothesis is true.
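This uniformity is easy to check by simulation. The sketch below is an illustration, not from the source text; the sample size and the choice of a one-sample t-test are arbitrary assumptions:

```python
# Illustrative simulation: when the null hypothesis is true and the test
# statistic is continuous, the p-value is uniform on [0, 1].
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pvals = []
for _ in range(10_000):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)   # null is true: mean 0
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    pvals.append(p)

# Each decile of [0, 1] should contain roughly 10% of the p-values.
counts, _ = np.histogram(pvals, bins=10, range=(0.0, 1.0))
print(counts / len(pvals))
```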
Lastly, the fixed predefined \alpha level can be interpreted as the rate of falsely rejecting the null hypothesis (or type I error), since

Pr(\mathrm{Reject}\; H \mid H) = Pr(p \leq \alpha) = \alpha.
This also means that if we fix an instantiation of the p-value and allow \alpha to vary over [0,1], we obtain an equivalent interpretation of the p-value in terms of the \alpha level: it is the lowest value of \alpha that can be assumed for which the null hypothesis is rejected for the given set of observations. Obviously, assuming an \alpha smaller than the instantiated p-value ends up not rejecting the null hypothesis.
Calculation
Usually, X is a test statistic rather than the actual observations. A test statistic is a scalar function of all the observations, such as the average or the correlation coefficient, which summarizes the characteristics of the data by a single number relevant to a particular inquiry. As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the observational data.
For the important case in which the data are hypothesized to follow the normal distribution, different null hypothesis tests have been developed, depending on the nature of the test statistic and thus its underlying distribution: the z-test for the normal distribution, the t-test for Student's t-distribution, and the F-test for the F-distribution. When the data do not follow a normal distribution, it may still be possible to approximate the distribution of these test statistics by a normal distribution by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test.
Thus, computing a p-value requires a null hypothesis, a test statistic (together with deciding whether the researcher is performing a one-tailed test or a two-tailed test), and data. Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its CDF, is often a difficult computation. Today, this computation is done using statistical software, often via numeric methods (rather than exact formulas), but in the early and mid 20th century, it was instead done via tables of values, and one interpolated or extrapolated p-values from these discrete values. Rather than using a table of p-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF).
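The modern workflow can be sketched in a few lines. In this hedged example (assuming SciPy; the sample, the N(0, 1) null, and the known variance are illustrative assumptions), a z statistic is computed from the data, its p-value is read off the null CDF, and the CDF is inverted in the manner of Fisher's tables:

```python
# Sketch: z test statistic -> p-value via the null CDF, then Fisher-style
# inversion of the CDF to get a critical value for a fixed p.
import numpy as np
from scipy import stats

data = np.array([0.8, 1.3, -0.2, 1.9, 0.7, 1.1])   # hypothetical sample
mu0, sigma = 0.0, 1.0                               # null: mean 0, known sd 1

z = (data.mean() - mu0) / (sigma / np.sqrt(len(data)))   # test statistic
p_two_sided = 2 * stats.norm.sf(abs(z))                  # p-value from the CDF

# Quantile function (inverse CDF): the cutoff for z at the two-tailed 5% level.
z_crit = stats.norm.ppf(1 - 0.05 / 2)   # about 1.96
print(z, p_two_sided, z_crit)
```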
Examples
A few simple examples follow, each illustrating a potential pitfall.
One roll of a pair of dice
Suppose a researcher rolls a pair of dice once and assumes a null hypothesis that the dice are fair, not loaded or weighted toward any specific number/roll/result; in other words, that each outcome is uniformly likely. The test statistic is "the sum of the rolled numbers," and the test is one-tailed. The researcher rolls the dice and observes that both dice show 6, yielding a test statistic of 12. The p-value of this outcome is 1/36, or about 0.028, because under the null hypothesis each of the 6 × 6 = 36 possible outcomes is equally likely, and 12 is the single most extreme value of the test statistic. If the researcher assumed a significance level of 0.05, this result would be deemed significant and the hypothesis that the dice are fair would be rejected.
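A brute-force check of this number takes only a few lines (a sketch in Python, enumerating the 36 equally likely rolls):

```python
# Enumerate all 36 equally likely rolls and count those whose sum is at
# least the observed test statistic of 12.
from itertools import product

observed = 12
rolls = list(product(range(1, 7), repeat=2))               # 36 outcomes
p_value = sum(a + b >= observed for a, b in rolls) / len(rolls)
print(p_value)   # 1/36 ≈ 0.028
```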
In this case, a single roll provides a very weak basis (that is, insufficient data) to draw a meaningful conclusion about the dice. This illustrates the danger of blindly applying the p-value without considering the experiment design.
Five heads in a row
Suppose a researcher flips a coin five times in a row and assumes a null hypothesis that the coin is fair. The test statistic of "total number of heads" can be one-tailed or two-tailed: a one-tailed test corresponds to seeing if the coin is biased towards heads, while a two-tailed test corresponds to seeing if the coin is biased either way. The researcher flips the coin five times and observes heads each time (HHHHH), yielding a test statistic of 5. In a one-tailed test, this is the most extreme value out of all possible outcomes and yields a p-value of (1/2)^{5} = 1/32 ≈ 0.03. If the researcher assumed a significance level of 0.05, this result would be deemed significant and the hypothesis that the coin is fair would be rejected. In a two-tailed test, a test statistic of zero heads (TTTTT) is just as extreme, and thus the data of HHHHH would yield a p-value of 2×(1/2)^{5} = 1/16 ≈ 0.06, which is not significant at the 0.05 level.
This demonstrates that specifying a direction (on a symmetric test statistic) halves the p-value (increases the significance) and can mean the difference between data being considered significant or not.
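Both numbers can be reproduced with the binomial distribution (a sketch assuming SciPy):

```python
# One- vs two-tailed p-values for HHHHH under a fair coin.
from scipy import stats

n, k = 5, 5
p_one_tailed = stats.binom.sf(k - 1, n, 0.5)            # Pr(heads >= 5) = 1/32
p_two_tailed = 2 * min(stats.binom.cdf(k, n, 0.5),
                       stats.binom.sf(k - 1, n, 0.5))   # 2/32 = 1/16
print(p_one_tailed, p_two_tailed)   # 0.03125, 0.0625
```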
Sample size dependence
Suppose a researcher flips a coin some arbitrary number of times (n) and assumes a null hypothesis that the coin is fair. The test statistic is the total number of heads, and the test is two-tailed. Suppose the researcher observes heads for each flip, yielding a test statistic of n and a p-value of 2/2^{n}. If the coin was flipped only 5 times, the p-value would be 2/32 = 0.0625, which is not significant at the 0.05 level. But if the coin was flipped 10 times, the p-value would be 2/1024 ≈ 0.002, which is significant at the 0.05 level.
In both cases the data suggest that the null hypothesis is false (that is, the coin is not fair somehow), but changing the sample size changes the p-value. In the first case, the sample size is not large enough to allow the null hypothesis to be rejected at the 0.05 level (in fact, the p-value can never be below 0.05 for the five-flip experiment).
This demonstrates that in interpreting p-values, one must also know the sample size, which complicates the analysis.
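The dependence is easy to tabulate (a short Python sketch of the 2/2^{n} formula above):

```python
# Two-tailed p-value for observing all heads in n flips of a fair coin.
for n in range(1, 11):
    p = 2 / 2**n
    verdict = "significant" if p <= 0.05 else "not significant"
    print(f"n = {n:2d}: p = {p:.4f} ({verdict} at the 0.05 level)")
```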
Alternating coin flips
Suppose a researcher flips a coin ten times and assumes a null hypothesis that the coin is fair. The test statistic is the total number of heads, and the test is two-tailed. Suppose the researcher observes alternating heads and tails with every flip (HTHTHTHTHT). This yields a test statistic of 5 and a p-value of 1 (completely unexceptional), as that is the expected number of heads.
Suppose instead that the test statistic for this experiment was the "number of alternations" (that is, the number of times when H followed T or T followed H), which is again two-tailed. That would yield a test statistic of 9, which is extreme and has a p-value of 1/2^8 = 1/256 \approx 0.0039. That would be considered extremely significant, well beyond the 0.05 level. These data indicate that, in terms of one test statistic, the data set is extremely unlikely to have occurred by chance, but they do not suggest that the coin is biased towards heads or tails.
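The contrast can be verified directly. The sketch below assumes SciPy, and relies on the fact that under a fair coin the 9 transition indicators are independent Bernoulli(1/2) variables, so the number of alternations is Binomial(9, 1/2):

```python
# Two test statistics on the same data HTHTHTHTHT: total heads
# (Binomial(10, 1/2)) vs number of alternations (Binomial(9, 1/2)).
from scipy import stats

flips = "HTHTHTHTHT"
heads = flips.count("H")                                      # statistic 1: 5
alternations = sum(a != b for a, b in zip(flips, flips[1:]))  # statistic 2: 9

def two_tailed(k, n):
    """Double-tail p-value for a Binomial(n, 1/2) statistic, capped at 1."""
    return min(1.0, 2 * min(stats.binom.cdf(k, n, 0.5),
                            stats.binom.sf(k - 1, n, 0.5)))

print(two_tailed(heads, 10))        # ≈ 1: 5 heads out of 10 is unexceptional
print(two_tailed(alternations, 9))  # 1/256 ≈ 0.0039: 9 alternations is extreme
```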
By the first test statistic, the data yield a high p-value, suggesting that the number of heads observed is not unlikely. By the second test statistic, the data yield a low p-value, suggesting that the pattern of flips observed is very, very unlikely. There is no "alternative hypothesis" (so only rejection of the null hypothesis is possible), and such data could have many causes. The data may instead be forged, or the coin may be flipped by a magician who intentionally alternated outcomes.
This example demonstrates that the p-value depends completely on the test statistic used and illustrates that p-values can only help researchers to reject a null hypothesis, not consider other hypotheses.
Impossible outcome and very unlikely outcome
Suppose a researcher flips a coin two times and assumes a null hypothesis that the coin is unfair: both sides are heads. The test statistic is the total number of heads (one-tailed). The researcher observes one head and one tail (HT), yielding a test statistic of 1 and a p-value of 0. In this case the data are inconsistent with the hypothesis: for a two-headed coin, a tail can never come up. The outcome is not simply unlikely under the null hypothesis but in fact impossible, and the null hypothesis can be definitively rejected as false. In practice, such experiments almost never occur, as all data that could be observed would be possible under the null hypothesis (albeit unlikely).
If the null hypothesis were instead that the coin came up heads 99% of the time (otherwise the same setup), the p-value would instead be^{[3]} 0.0199 \approx 0.02. In that case, the null hypothesis could not definitely be ruled out (this outcome is unlikely under the null hypothesis but possible), but the null hypothesis would be rejected at the 0.05 level (in fact at the 0.02 level), since the outcome is less than 2% likely under the null hypothesis.
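The footnote's arithmetic can be checked with the binomial CDF (a one-line sketch assuming SciPy):

```python
# Pr(heads <= 1 in 2 flips) under the 99%-heads null:
# P(TT) + P(HT) + P(TH) = 0.01**2 + 2 * 0.99 * 0.01 = 0.0199.
from scipy import stats

print(stats.binom.cdf(1, 2, 0.99))   # 0.0199
```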
Coin flipping
As an example of a statistical test, an experiment is performed to determine whether a coin flip is fair (equal chance of landing heads or tails) or unfairly biased (one outcome being more likely than the other).
Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The null hypothesis is that the coin is fair, and the test statistic is the number of heads. If a right-tailed test is considered, the p-value of this result is the chance of a fair coin landing on heads at least 14 times out of 20 flips. That probability can be computed from binomial coefficients as

\begin{align} & \operatorname{Prob}(14\text{ heads}) + \operatorname{Prob}(15\text{ heads}) + \cdots + \operatorname{Prob}(20\text{ heads}) \\ & = \frac{1}{2^{20}} \left[ \binom{20}{14} + \binom{20}{15} + \cdots + \binom{20}{20} \right] = \frac{60,\!460}{1,\!048,\!576} \approx 0.058 \end{align}
This probability is the p-value, considering only extreme results that favor heads. This is called a one-tailed test. However, the deviation can be in either direction, favoring either heads or tails. The two-tailed p-value, which considers deviations favoring either heads or tails, may instead be calculated. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the single-sided p-value computed above: 2 × 0.058 ≈ 0.115.
In the above example:

Null hypothesis (H_{0}): The coin is fair, with Prob(heads) = 0.5

Test statistic: Number of heads

Level of significance: 0.05

Observation O: 14 heads out of 20 flips; and

Two-tailed p-value of observation O given H_{0} = 2 × min(Prob(no. of heads ≥ 14), Prob(no. of heads ≤ 14)) = 2 × min(0.058, 0.978) = 2 × 0.058 ≈ 0.115.
Note that Prob(no. of heads ≤ 14) = 1 − Prob(no. of heads ≥ 14) + Prob(no. of heads = 14) = 1 − 0.058 + 0.036 = 0.978; however, the symmetry of the binomial distribution makes that an unnecessary computation to find the smaller of the two probabilities. Here, the calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis, as it falls within the range of what would happen 95% of the time were the coin in fact fair. Hence, the null hypothesis is not rejected at the 5% level. Although the coin did not fall evenly, the deviation from the expected outcome is small enough to be consistent with chance.
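The whole worked example can be verified numerically (a sketch assuming SciPy):

```python
# Verify the 14-heads-out-of-20 example: one-tailed and two-tailed p-values.
from scipy import stats

n, k = 20, 14
p_right = stats.binom.sf(k - 1, n, 0.5)   # Pr(heads >= 14) ≈ 0.058
p_left = stats.binom.cdf(k, n, 0.5)       # Pr(heads <= 14) ≈ 0.979
p_two_sided = 2 * min(p_right, p_left)    # ≈ 0.115
print(p_right, p_left, p_two_sided)
```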
However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%). The null hypothesis (that the observed result of 15 heads out of 20 flips can be ascribed to chance alone) is rejected when a 5% cutoff is used.
History
Figures: Pierre-Simon Laplace; biologist and statistician Ronald Fisher.
Computations of p-values date back to the 1770s, when they were calculated by Pierre-Simon Laplace.
The p-value was first formally introduced by Karl Pearson in his Pearson's chi-squared test, using the chi-squared distribution and notated as capital P. The p-values for the chi-squared distribution (for various values of χ^{2} and degrees of freedom), now notated as P, were calculated in (Elderton 1902), collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII).
The use of the p-value in statistics was popularized by Ronald Fisher, and it plays a central role in his approach to the subject. In his influential book Statistical Methods for Research Workers (1925), Fisher proposes the level p = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for statistical significance, and applies this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance (see 68–95–99.7 rule).^{[4]}
He then computes a table of values, similar to Elderton's but, importantly, reverses the roles of χ^{2} and p. That is, rather than computing p for different values of χ^{2} (and degrees of freedom n), he computes values of χ^{2} that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01.^{[13]} That allowed computed values of χ^{2} to be compared against cutoffs and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. Tables of the same type were then compiled in (Fisher & Yates 1938), which cemented the approach.
As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment, which is the archetypal example of the p-value.
To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup, then the tea, or first tea, then milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In that case, the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was 1/\binom{8}{4} = 1/70 \approx 0.014, so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.)
Fisher reiterated the p = 0.05 threshold and explained its rationale.
He also applies this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have yielded a p-value of only 1/\binom{6}{3} = 1/20 = 0.05, which would not have met this level of significance. Fisher also underlined the frequentist interpretation of p, as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true.
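Both tea-tasting p-values are pure combinatorics (a short sketch in Python):

```python
# Probability of a perfect classification by chance: one correct choice
# out of C(n, n/2) equally likely ways to pick which cups are which.
from math import comb

p_8_cups = 1 / comb(8, 4)   # 1/70 ≈ 0.014, below the 0.05 threshold
p_6_cups = 1 / comb(6, 3)   # 1/20 = 0.05, judged insufficient by Fisher
print(p_8_cups, p_6_cups)
```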
In later editions, Fisher explicitly contrasted the use of the p-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures".^{[16]} Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exact p-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on the costs of error, which, he argues, are inapplicable to scientific research.
Misunderstandings
Despite the ubiquity of p-value tests, this particular test for statistical significance has been criticized for its inherent shortcomings and the potential for misinterpretation.
The data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which, however, does not imply that the null hypothesis is true). In Fisher's formulation, there is a disjunction: a low p-value means either that the null hypothesis is true and a highly improbable event has occurred, or that the null hypothesis is false.
However, people interpret the p-value in many incorrect ways and try to draw other conclusions from p-values, which do not follow.
The p-value does not in itself allow reasoning about the probabilities of hypotheses, which requires multiple hypotheses or a range of hypotheses, with a prior distribution of likelihoods between them, as in Bayesian statistics. There, one uses a likelihood function for all possible values of the prior instead of the p-value for a single null hypothesis.
The p-value refers only to a single hypothesis, called the null hypothesis, and does not make reference to or allow conclusions about any other hypotheses, such as the alternative hypothesis in Neyman–Pearson statistical hypothesis testing. In that approach, one instead has a decision function between two alternatives, often based on a test statistic, and computes the rates of Type I and Type II errors as α and β. However, the p-value of a test statistic cannot be directly compared to these error rates α and β. Instead, it is fed into a decision function.
There are several common misunderstandings about p-values.^{[17]}^{[18]}

The p-value is not the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. It is not connected to either. In fact, frequentist statistics does not and cannot attach probabilities to hypotheses. A comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily); this is Lindley's paradox. There are also a priori probability distributions in which the posterior probability and the p-value have similar or equal values.^{[19]}

The p-value is not the probability that a finding is "merely a fluke." Calculating the p-value is based on the assumption that every finding is a fluke, the product of chance alone. The phrase "the results are due to chance" is used to mean that the null hypothesis is probably correct. However, that is merely a restatement of the inverse probability fallacy, since the p-value cannot be used to figure out the probability of a hypothesis being true.

The p-value is not the probability of falsely rejecting the null hypothesis. That error is a version of the so-called prosecutor's fallacy.

The p-value is not the probability that replicating the experiment would yield the same conclusion. Quantifying the replicability of an experiment was attempted through the concept of p_rep.

The significance level, such as 0.05, is not determined by the p-value. Rather, the significance level is decided by the person conducting the experiment (with the value 0.05 widely used by the scientific community) before the data are viewed, and it is compared against the calculated p-value after the test has been performed. (However, reporting a p-value is more useful than simply saying that the results were or were not significant at a given level, and allows readers to decide for themselves whether to consider the results significant.)

The p-value does not indicate the size or importance of the observed effect. The two vary together, however, and the larger the effect, the smaller the sample size that will be required to get a significant p-value (see effect size).
Criticisms
Critics of p-values point out that the criterion used to decide "statistical significance" is based on an arbitrary choice of level (often set at 0.05), and that this criterion leads to an alarming number of false positive tests. If one defines the false positive rate as the fraction of all "statistically significant" tests in which the null hypothesis is actually true, several arguments suggest that this rate is at least about 30 percent for p-values that are close to 0.05. In order to arrive at this number, one needs to postulate something about the prior probability that a real effect exists. However, the conclusion is robust in the sense that, regardless of what prior distribution is postulated, the null hypothesis will be rejected, wrongly, far more than 5 percent of the time.^{[20]}^{[21]}^{[22]} Simulation of t-tests shows that, if we observe p = 0.047 in a single test and claim to have made a discovery, that claim will be wrong at least 26 percent of the time, and much more often if the hypothesis is implausible.^{[22]} This fact alone probably makes a substantial contribution to the alarming irreproducibility of experimental results that has been observed in some areas,^{[23]} even before one gets to problems caused by multiple comparisons, p-hacking, and other well-known sources of false discoveries. This has led to calls to use a smaller p-value as a criterion, e.g. p = 0.005 or 0.001.^{[20]}^{[21]}^{[24]}
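The flavor of such simulations can be reproduced with a short sketch (an illustration only; the prior probability of a real effect, the effect size, and the group size below are assumptions chosen for the example, not the cited papers' exact setup):

```python
# Simulate many two-sample t-tests where only a minority of tested
# hypotheses are real effects, and measure the fraction of "significant"
# results (p <= 0.05) that are actually false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, prior_real = 20_000, 0.1   # assumed: 10% of hypotheses are real
n_per_group, effect = 16, 1.0       # assumed group size and effect (in sd units)

false_pos = true_pos = 0
for _ in range(n_tests):
    real = rng.random() < prior_real
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect if real else 0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p <= 0.05:
        true_pos += real
        false_pos += not real

print("fraction of 'discoveries' that are false:",
      false_pos / (false_pos + true_pos))   # well above 0.05
```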
The p-value is incompatible with the likelihood principle and depends on the experimental design and on the test statistic in question. That is, the definition of "more extreme" data depends on the sampling methodology adopted by the investigator;^{[25]} for example, the situation in which the investigator flips the coin 100 times, yielding 50 heads, has a set of extreme data that is different from the situation in which the investigator continues to flip the coin until 50 heads are achieved, yielding 100 flips.^{[26]} That is to be expected, as the experiments are different experiments, and the sample spaces and the probability distributions for the outcomes are different, even though the observed data (50 heads out of 100 flips) are the same for the two experiments.
Fisher proposed p as an informal measure of evidence against the null hypothesis. He called on researchers to combine p in the mind with other types of evidence for and against that hypothesis such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies.
In very rare cases, the use of p-values has been banned by certain journals.^{[28]}
Related quantities
A closely related concept is the E-value,^{[29]} which is the expected number of times in multiple testing that one would obtain a test statistic at least as extreme as the one actually observed, if one assumes that the null hypothesis is true. The E-value is the product of the number of tests and the p-value.
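As a minimal illustration of that relation (the numbers are hypothetical):

```python
# E-value = (number of tests) x (p-value of one test): the expected count
# of results this extreme across the whole screen if the null is true.
n_tests = 1_000     # hypothetical number of tests performed
p_value = 0.0003    # hypothetical p-value of a single test
print(n_tests * p_value)   # 0.3 such hits expected by chance
```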
See also
Notes

^ Note that the statistical significance of a result does not imply that the result is scientifically significant as well.

^ It should be noted that a statistical hypothesis is conceptually different from a scientific hypothesis.

^ The probability of TT is (0.01)^2, and the probabilities of HT and TH are 0.99 \times 0.01 and 0.01 \times 0.99, which are equal; adding these yields 0.01^2 + 2 \times 0.01 \times 0.99 = 0.0199.

^ To be precise, p = 0.05 corresponds to about 1.96 standard deviations for a normal distribution (two-tailed test), and 2 standard deviations corresponds to about a 1 in 22 chance of being exceeded by chance, or p ≈ 0.045; Fisher notes these approximations.
References

^ Hubbard, R. (2004). "Blurring the Distinctions Between p's and α's in Psychological Research". Theory & Psychology 14 (3): 295–327.

^ Nuzzo, R. (2014). "Scientific method: Statistical errors". Nature 506 (7487): 150.

^ Wetzels, R.; Matzke, D.; Lee, M. D.; Rouder, J. N.; Iverson, G. J.; Wagenmakers, E. J. (2011). "Statistical Evidence in Experimental Psychology: An Empirical Comparison Using 855 t Tests". Perspectives on Psychological Science 6 (3): 291.

^ Babbie, E. (2007). The practice of social research 11th ed. Thomson Wadsworth: Belmont, California.

^ "Scientists Perturbed by Loss of Stat Tool to Sift Research Fudge from Fact". Scientific American. April 16, 2015.

^ Fisher 1925, pp. 78–79, 98, Chapter IV: Tests of Goodness of Fit, Independence and Homogeneity; with Table of χ^{2}, and Table III: Table of χ^{2}.

^ Fisher 1971, Section 12.1 Scientific Inference and Acceptance Procedures.

^ Sterne, J. A. C.; Smith, G. Davey (2001). "Sifting the evidence–what's wrong with significance tests?". BMJ (Clinical research ed.) 322 (7280): 226–231.

^ Schervish, M. J. (1996). "P Values: What They Are and What They Are Not".

^ Casella, George; Berger, Roger L. (1987). "Reconciling Bayesian and Frequentist Evidence in the OneSided Testing Problem". Journal of the American Statistical Association 82 (397): 106–111.

^ ^{a} ^{b} Sellke, Thomas; Bayarri, M. J.; Berger, James O. (2001). "Calibration of p Values for Testing Precise Null Hypotheses".

^ ^{a} ^{b} Johnson, Valen (2013). "Revised standards for statistical evidence". Proceedings of the National Academy of Sciences USA 110: 19313–19317.

^ ^{a} ^{b} Colquhoun, David (2015). "An investigation of the false discovery rate and the misinterpretation of pvalues". Royal Society Open Science 1: 140216.

^ Open Science Collaboration (2015). "Estimating the reproducibility of psychological science". Science 349: aac4716.

^ Colquhoun, David. "Comment on interpretation of p-values". Royal Society Open Science.

^ Casson, R. J. (2011). "The pesty P value". Clinical & Experimental Ophthalmology 39 (9): 849–850.

^ Johnson, D. H. (1999). "The Insignificance of Statistical Significance Testing". Journal of Wildlife Management 63 (3): 763–772.

^ Nuzzo, Regina. "Scientists Perturbed by Loss of Stat Tool to Sift Research Fudge from Fact". scientificamerican.com. Scientific American. Retrieved 22 April 2015.

^ National Institutes of Health definition of E-value
Further reading

Fisher, R. A.; Yates, F. (1938). Statistical Tables for Biological, Agricultural and Medical Research. London.

Hubbard, Raymond; Bayarri, M. J. (November 2003). "P Values are not Error Probabilities" (PDF). A working paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson Type I error rate α.

Hubbard, Raymond; Lindsay, R. Murray (2008). "Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing" (PDF). Theory & Psychology 18 (1): 69–88.

Dallal, Gerard E. (2012). The Little Handbook of Statistical Practice.

Biau, D. J.; Jolles, B. M.; Porcher, R. (March 2010). "P value and the theory of hypothesis testing: an explanation for new researchers". Clin Orthop Relat Res 463 (3): 885–892.
External links

12 P-value Misconceptions: a good overview given in the linked article

Presentation about the p-value

Free online p-values calculators for various specific tests (chi-square, Fisher's F-test, etc.)

Understanding p-values, including a Java applet that illustrates how the numerical values of p-values can give quite misleading impressions about the truth or falsity of the hypothesis under test