G-test

In statistics, G-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended.[1]

The general formula for G is

G = 2\sum_{i} O_i \ln\left(\frac{O_i}{E_i}\right) ,

where O_i is the observed frequency in a cell, E_i is the expected frequency under the null hypothesis, \ln denotes the natural logarithm, and the sum is taken over all non-empty cells.
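
As a concrete illustration, here is a minimal Python sketch of this formula (the function name g_statistic and the die-roll counts are invented for the example):

import numpy as np

def g_statistic(observed, expected):
    # G = 2 * sum(O_i * ln(O_i / E_i)), summed over non-empty cells.
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    mask = observed > 0  # empty cells contribute nothing, since x*ln(x) -> 0
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Example: 100 rolls of a die, tested against the fair-die null hypothesis.
obs = [18, 20, 12, 17, 15, 18]
exp = [100 / 6] * 6
print(g_statistic(obs, exp))  # about 2.46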

G-tests have been recommended at least since the 1981 edition of the popular statistics textbook by Robert R. Sokal and F. James Rohlf.[2]

Contents

  • Distribution and usage
  • Relation to the chi-squared test
  • Relation to Kullback–Leibler divergence
  • Relation to mutual information
  • Application
  • Statistical software
  • References
  • External links

Distribution and usage

Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of G is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.
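
In Python, for instance, the p-value can be read off the chi-squared distribution directly; SciPy's power_divergence with lambda_="log-likelihood" computes the G statistic (a sketch, reusing the die-roll counts from the example above):

from scipy.stats import chi2, power_divergence

obs = [18, 20, 12, 17, 15, 18]
exp = [100 / 6] * 6

# lambda_="log-likelihood" selects the G statistic from the
# Cressie-Read power-divergence family.
g, p = power_divergence(obs, f_exp=exp, lambda_="log-likelihood")

# Equivalent by hand: compare G against a chi-squared distribution with
# k - 1 = 5 degrees of freedom, as in the corresponding chi-squared test.
p_manual = chi2.sf(g, df=5)
print(g, p, p_manual)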

For very small samples, alternatives such as the multinomial test for goodness of fit, Fisher's exact test for contingency tables, or Bayesian hypothesis selection are preferable to the G-test.

Relation to the chi-squared test

The commonly used chi-squared tests for goodness of fit to a distribution and for independence in contingency tables are in fact approximations of the log-likelihood ratio on which the G-tests are based. The general formula for Pearson's chi-squared test statistic is

\chi^2 = \sum_{i} {\frac{\left(O_i - E_i\right)^2}{E_i}} .

The approximation of G by the chi-squared statistic is obtained by a second-order Taylor expansion of the natural logarithm around 1. This approximation was developed by Karl Pearson because at the time it was unduly laborious to calculate log-likelihood ratios. With the advent of electronic calculators and personal computers, this is no longer a problem. A derivation of how the chi-squared test is related to the G-test and likelihood ratios, including a full Bayesian solution, is provided in Hoey (2012).[3]
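
Spelling out that expansion: writing O_i = E_i(1+\delta_i) with \delta_i = (O_i - E_i)/E_i, and noting that \sum_i (O_i - E_i) = 0 when the observed and expected totals agree, the expansion \ln(1+\delta) \approx \delta - \delta^2/2 gives

G = 2\sum_{i} E_i (1+\delta_i) \ln(1+\delta_i) \approx 2\sum_{i} E_i \left( \delta_i + \frac{\delta_i^2}{2} \right) = 2\sum_{i} (O_i - E_i) + \sum_{i} \frac{(O_i - E_i)^2}{E_i} = \chi^2 .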

For samples of a reasonable size, the G-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution is better for the G-test than for Pearson's chi-squared test.[4] In cases where O_i > 2 \cdot E_i for some cell, the G-test is always better than the chi-squared test.

For testing goodness of fit, the G-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.[5][6]

Relation to Kullback–Leibler divergence

The G-test quantity is proportional to the Kullback–Leibler divergence of the empirical distribution from the theoretical distribution.
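
Concretely, writing N = \sum_i O_i, \hat{\pi}_i = O_i / N for the empirical proportions, and \pi_i = E_i / N for the theoretical ones,

G = 2 N \sum_{i} \hat{\pi}_i \ln\left(\frac{\hat{\pi}_i}{\pi_i}\right) = 2 N \, D_{\mathrm{KL}}(\hat{\pi} \,\|\, \pi) .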

Relation to mutual information

For analysis of contingency tables the value of G can also be expressed in terms of mutual information.

Let

N = \sum_{ij} O_{ij} \; , \quad \pi_{ij} = \frac{O_{ij}}{N} \; , \quad \pi_{i.} = \frac{\sum_j O_{ij}}{N} \; , \quad \text{and} \quad \pi_{.j} = \frac{\sum_i O_{ij}}{N} \; .

Then G can be expressed in several alternative forms:

G = 2 \cdot N \cdot \sum_{ij}{\pi_{ij} \left( \ln(\pi_{ij})-\ln(\pi_{i.})-\ln(\pi_{.j}) \right)} ,
G = 2 \cdot N \cdot \left[ H(r) + H(c) - H(r,c) \right] ,
G = 2 \cdot N \cdot MI(r,c) \, ,

where the entropy of a discrete random variable X is defined as

H(X) = - {\sum_{x \in \text{Supp}(X)} p(x) \log p(x)} \, ,

and where

MI(r,c)= H(r) + H(c) - H(r,c) \,

is the mutual information between the row vector r and the column vector c of the contingency table.
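
A minimal Python sketch of the entropy form above (the helper names entropy and g_from_table are invented for the example):

import numpy as np

def entropy(p):
    # H(X) = -sum p(x) ln p(x), restricted to the support of X.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def g_from_table(table):
    # G = 2 * N * [H(r) + H(c) - H(r,c)] for a contingency table.
    O = np.asarray(table, dtype=float)
    N = O.sum()
    pij = O / N            # joint proportions pi_ij
    pi = pij.sum(axis=1)   # row marginals pi_i.
    pj = pij.sum(axis=0)   # column marginals pi_.j
    return 2.0 * N * (entropy(pi) + entropy(pj) - entropy(pij.ravel()))

# Agrees with scipy.stats.chi2_contingency(table, correction=False,
# lambda_="log-likelihood") on the same table.
print(g_from_table([[10, 20], [30, 40]]))  # about 0.80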

It can also be shown that the inverse document frequency weighting commonly used for text retrieval is an approximation of G, applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus.[7] Similarly, the result of Bayesian inference applied to a choice of a single multinomial distribution for all rows of the contingency table taken together, versus the more general alternative of a separate multinomial per row, produces results very similar to the G statistic.

Application

Statistical software

  • The R programming language has the likelihood.test function in the Deducer package.
  • In SAS, one can conduct a G-test by applying the /chisq option after the proc freq statement.[8]
  • In Stata, one can conduct a G-test by applying the lr option after the tabulate command.
  • Fisher's G-test in the GeneCycle Package of the R programming language (fisher.g.test) does not implement the G-test as described in this article, but rather Fisher's exact test of Gaussian white-noise in a time series.[9]
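
In Python, the same test is available through SciPy's chi2_contingency; a minimal sketch (correction=False disables the Yates continuity correction that SciPy applies to 2 x 2 tables by default):

from scipy.stats import chi2_contingency

table = [[10, 20], [30, 40]]
# lambda_="log-likelihood" selects the G statistic rather than
# Pearson's chi-squared statistic.
g, p, dof, expected = chi2_contingency(table, correction=False,
                                       lambda_="log-likelihood")
print(g, p, dof)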

References

  1. ^ McDonald, J.H. (2014). "G–test of goodness-of-fit". Handbook of Biological Statistics (Third ed.). Baltimore, Maryland: Sparky House Publishing. pp. 53–58. 
  2. ^ Sokal, R. R.; Rohlf, F. J. (1981). Biometry: The Principles and Practice of Statistics in Biological Research (Second ed.). New York: Freeman.  
  3. ^ Hoey, J. (2012). "The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test". 
  4. ^ Harremoës, P.; Tusnády, G. (2012). "Information divergence is more chi squared distributed than the chi squared statistic". Proceedings ISIT 2012. pp. 538–543. 
  5. ^ Quine, M. P.; Robinson, J. (1985). "Efficiencies of chi-square and likelihood ratio goodness-of-fit tests".  
  6. ^ Harremoës, P.; Vajda, I. (2008). "On the Bahadur-efficient testing of uniformity by means of the entropy".  
  7. ^ Dunning, Ted (1993). "Accurate Methods for the Statistics of Surprise and Coincidence", Computational Linguistics, Volume 19, issue 1 (March, 1993).
  8. ^ G-test of independence, G-test for goodness-of-fit in Handbook of Biological Statistics, University of Delaware. (pp. 46–51, 64–69 in: McDonald, J. H. (2009) Handbook of Biological Statistics (2nd ed.). Sparky House Publishing, Baltimore, Maryland.)
  9. ^ Fisher, R. A. (1929), "Tests of significance in harmonic analysis", Proceedings of the Royal Society of London: Series A, Volume 125, Issue 796, pp. 54–59.

External links

  • G2/Log-likelihood calculator