Multiple comparisons problem

In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously[1] or infers a subset of parameters selected based on the observed values.[2] It is also known as the look-elsewhere effect.

Errors in inference, including confidence intervals that fail to include their corresponding population parameters or hypothesis tests that incorrectly reject the null hypothesis, are more likely to occur when one considers the set as a whole. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a higher significance threshold for individual comparisons, so as to compensate for the number of inferences being made.

History

Interest in the problem of multiple comparisons began in the 1950s with the work of Tukey and Scheffé. New methods and procedures followed, including the closed testing procedure (Marcus et al., 1976) and the Holm–Bonferroni method (1979). The issue attracted renewed attention in the following decades (Hochberg and Tamhane, 1987; Westfall and Young, 1993; Hsu, 1996). In 1995, work on the false discovery rate and other new ideas began. The first conference on multiple comparisons took place in Israel in 1996, followed by conferences around the world: Berlin (2000), Bethesda (2002), Shanghai (2005), Vienna (2007), and Tokyo (2009). All of this reflects accelerating interest in the field of multiple comparisons.[3]

The problem

In this context the term "comparisons" refers to comparisons of two groups, such as a treatment group and a control group. "Multiple comparisons" arise when a statistical analysis encompasses a number of formal comparisons, with the presumption that attention will focus on the strongest differences among all comparisons that are made. Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples.

  • Suppose the treatment is a new way of teaching writing to students, and the control is the standard way of teaching writing. Students in the two groups can be compared in terms of grammar, spelling, organization, content, and so on. As more attributes are compared, it becomes more likely that the treatment and control groups will appear to differ on at least one attribute by random chance alone.
  • Suppose we consider the efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. As more symptoms are considered, it becomes more likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.
  • Suppose we consider the safety of a drug in terms of the occurrences of different types of side effects. As more types of side effects are considered, it becomes more likely that the new drug will appear to be less safe than existing drugs in terms of at least one side effect.

In all three examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.

For example, if one test is performed at the 5% level, there is only a 5% chance of incorrectly rejecting the null hypothesis if the null hypothesis is true. However, for 100 tests where all null hypotheses are true, the expected number of incorrect rejections is 5. If the tests are independent, the probability of at least one incorrect rejection is 99.4%. These errors are called false positives or Type I errors.
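
To make these numbers concrete, here is a minimal sketch in Python (the calculation assumes the tests are independent):

    # Expected false positives and the chance of at least one,
    # for 100 independent tests at the 5% level.
    alpha, m = 0.05, 100
    expected_false_rejections = m * alpha      # 5 when every null hypothesis is true
    p_at_least_one = 1 - (1 - alpha) ** m      # probability of one or more false rejections
    print(expected_false_rejections)           # 5.0
    print(round(p_at_least_one, 3))            # 0.994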

The problem also occurs for confidence intervals. A single confidence interval with a 95% coverage probability will likely contain the population parameter it is meant to contain; in the long run, 95% of confidence intervals built in that way will contain the true population parameter. However, if one considers 100 confidence intervals simultaneously, each with 95% coverage probability, it is highly likely that at least one interval will not contain its population parameter. The expected number of such non-covering intervals is 5, and if the intervals are independent, the probability that at least one interval does not contain its parameter is 99.4%.

Techniques have been developed to control the false positive error rate associated with performing multiple statistical tests. Similarly, techniques have been developed to adjust confidence intervals so that the probability of at least one of the intervals not covering its target value is controlled.

Classification of m hypothesis tests

The following table summarizes the possible outcomes when testing m null hypotheses and defines the random variables associated with the m hypothesis tests. Of the m hypotheses, m0 null hypotheses are true; V is the number of true null hypotheses that are rejected (false positives), S is the number of false null hypotheses that are rejected (true positives), U is the number of true null hypotheses that are not rejected (true negatives), T is the number of false null hypotheses that are not rejected (false negatives), and R = V + S is the total number of rejected hypotheses. R is an observable random variable, while V, S, U, and T are unobservable; m0 is fixed but unknown.

                               Null hypothesis is true (H0)   Alternative hypothesis is true (H1)   Total
    Declared significant       V                              S                                     R
    Declared non-significant   U                              T                                     m − R
    Total                      m0                             m − m0                                m

Example: Flipping coins

For example, one might declare that a coin was biased if in 10 flips it landed heads at least 9 times. Indeed, if one assumes as a null hypothesis that the coin is fair, then the probability that a fair coin would come up heads at least 9 out of 10 times is (10 + 1) × (1/2)^10 ≈ 0.0107. This is relatively unlikely, and under statistical criteria such as p-value < 0.05, one would declare that the null hypothesis should be rejected; i.e., the coin is unfair.

A multiple-comparisons problem arises if one wanted to use this test (which is appropriate for testing the fairness of a single coin) to test the fairness of many coins. Imagine testing 100 fair coins by this method. Given that the probability of a fair coin coming up heads 9 or 10 times in 10 flips is 0.0107, seeing a particular (i.e., pre-selected) coin come up heads 9 or 10 times would still be very unlikely, but seeing any coin behave that way, without concern for which one, would be more likely than not. Precisely, the likelihood that all 100 fair coins are identified as fair by this criterion is (1 − 0.0107)^100 ≈ 0.34, so the probability of falsely identifying at least one fair coin as unfair is about 0.66.
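
The arithmetic of this example can be reproduced directly; a minimal sketch in Python using only the standard library:

    # Coin-flipping example: probability calculations for the multiple-testing effect.
    from math import comb

    # P(at least 9 heads in 10 flips of a fair coin)
    p_single = sum(comb(10, k) for k in (9, 10)) / 2 ** 10
    print(round(p_single, 4))                  # 0.0107

    # P(no coin among 100 fair coins shows 9+ heads) and its complement
    p_all_pass = (1 - p_single) ** 100
    print(round(p_all_pass, 2))                # 0.34
    print(round(1 - p_all_pass, 2))            # 0.66: at least one false "unfair" verdict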

What can be done

For hypothesis testing, the problem of multiple comparisons (also known as the multiple testing problem) results from the increase in type I error that occurs when statistical tests are used repeatedly. If k independent comparisons are performed, the experiment-wide significance level \bar{\alpha}, also termed the family-wise error rate (FWER), is given by

\bar{\alpha} = 1 - \left( 1 - \alpha_\mathrm{per\ comparison} \right)^k.

Hence, unless the tests are perfectly dependent, \bar{\alpha} increases as the number of comparisons increases. If we do not assume that the comparisons are independent, then we can still say:

\bar{\alpha} \le k \cdot \alpha_\mathrm{per\ comparison},

which follows from Boole's inequality. Example: 0.2649 = 1 - \left(1 - 0.05\right)^6 \le 0.05 \times 6 = 0.3.

There are different ways to ensure that the family-wise error rate is at most \bar{\alpha}. The most conservative method, which is free of dependence and distributional assumptions, is the Bonferroni correction \alpha_\mathrm{per\ comparison} = \bar{\alpha}/k.

A more accurate correction can be obtained by solving the equation for the family-wise error rate of k independent comparisons for \alpha_\mathrm{per\ comparison}. This yields \alpha_\mathrm{per\ comparison} = 1 - \left(1 - \bar{\alpha}\right)^{1/k}, which is known as the Šidák correction. Another procedure is the Holm–Bonferroni method, which uniformly delivers more power than the simple Bonferroni correction by testing the most extreme p-value (i = 1) against the strictest criterion and the others (i > 1) against progressively less strict criteria: \alpha_\mathrm{per\ comparison} = \bar{\alpha}/(k - i + 1).[4]
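
The three per-comparison thresholds can be compared side by side; a minimal sketch in Python, with \bar{\alpha} = 0.05 and k = 6 chosen only for illustration:

    # Bonferroni, Šidák, and Holm-Bonferroni per-comparison thresholds.
    alpha_bar, k = 0.05, 6

    bonferroni = alpha_bar / k                      # most conservative
    sidak = 1 - (1 - alpha_bar) ** (1 / k)          # exact under independence
    holm = [alpha_bar / (k - i + 1) for i in range(1, k + 1)]  # i-th smallest p-value

    print(round(bonferroni, 5))                # 0.00833
    print(round(sidak, 5))                     # 0.00851 (slightly less strict)
    print([round(t, 5) for t in holm])         # [0.00833, 0.01, 0.0125, 0.01667, 0.025, 0.05]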

Methods

Multiple testing correction refers to re-calculating probabilities obtained from a statistical test which was repeated multiple times. In order to retain a prescribed family-wise error rate α in an analysis involving more than one comparison, the error rate for each comparison must be more stringent than α. Boole's inequality implies that if each of k tests is performed to have type I error rate α/k, the total error rate will not exceed α. This is called the Bonferroni correction, and is one of the most commonly used approaches for multiple comparisons.

In some situations, the Bonferroni correction is substantially conservative, i.e., the actual family-wise error rate is much less than the prescribed level α. This occurs when the test statistics are highly dependent (in the extreme case where the tests are perfectly dependent, the family-wise error rate with no multiple comparisons adjustment and the per-test error rates are identical). For example, in fMRI analysis,[5][6] tests are done on over 100,000 voxels in the brain. The Bonferroni method would require p-values to be smaller than .05/100000 to declare significance. Since adjacent voxels tend to be highly correlated, this threshold is generally too stringent.

Because simple techniques such as the Bonferroni method can be too conservative, there has been a great deal of attention paid to developing better techniques, such that the overall rate of false positives can be maintained without inflating the rate of false negatives unnecessarily. Such methods can be divided into general categories:

  • Methods where total alpha can be proved to never exceed 0.05 (or some other chosen value) under any conditions. These methods provide "strong" control against Type I error, in all conditions including a partially correct null hypothesis.
  • Methods where total alpha can be proved not to exceed 0.05 except under certain defined conditions.
  • Methods which rely on an omnibus test before proceeding to multiple comparisons. Typically these methods require a significant ANOVA/Tukey's range test before proceeding to multiple comparisons. These methods have "weak" control of Type I error.
  • Empirical methods, which control the proportion of Type I errors adaptively, utilizing correlation and distribution characteristics of the observed data.

The advent of computerized resampling methods, such as bootstrapping and Monte Carlo simulations, has given rise to many techniques in the latter category. In some cases where exhaustive permutation resampling is performed, these tests provide exact, strong control of Type I error rates; in other cases, such as bootstrap sampling, they provide only approximate control.
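
As one concrete instance of the resampling approach, the maximum-statistic permutation method (in the spirit of Westfall and Young) estimates an FWER-controlling critical value from permuted data. A minimal sketch in Python/NumPy, on simulated data under the global null:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 40, 200                                  # subjects, variables
    labels = np.repeat([0, 1], n // 2)              # two groups of 20
    X = rng.normal(size=(n, m))                     # all null hypotheses true here

    def abs_t(X, labels):
        # Two-sample t statistics (absolute value) for every variable at once.
        a, b = X[labels == 0], X[labels == 1]
        se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return np.abs((a.mean(0) - b.mean(0)) / se)

    observed = abs_t(X, labels)
    null_max = np.array([abs_t(X, rng.permutation(labels)).max()
                         for _ in range(1000)])     # max statistic per permutation
    threshold = np.quantile(null_max, 0.95)         # FWER-controlling critical value
    print(round(threshold, 2), int((observed > threshold).sum()))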

Post-hoc testing of ANOVAs

Multiple comparison procedures are commonly used in an analysis of variance after obtaining a significant omnibus test result, like the ANOVA F-test. The significant ANOVA result suggests rejecting the global null hypothesis H0 that the means are the same across the groups being compared. Multiple comparison procedures are then used to determine which means differ. In a one-way ANOVA involving K group means, there are K(K − 1)/2 pairwise comparisons.

A number of methods have been proposed for this problem, some of which are:

  • Single-step procedures
  • Multi-step procedures based on the Studentized range statistic

Choosing the most appropriate multiple-comparison procedure for your specific situation is not easy. Many tests are available, and they differ in a number of ways.[7]

For example, if the variances of the groups being compared are similar, the Tukey–Kramer method is generally viewed as performing optimally or near-optimally in a broad variety of circumstances.[8] The situation in which the variances of the groups being compared differ is more complex, and different methods perform well in different circumstances.
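
A minimal sketch of the ANOVA-then-Tukey workflow in Python/SciPy (stats.tukey_hsd requires SciPy 1.8 or later; the simulated data are illustrative only):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    groups = [rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.0, 0.8)]

    f_stat, p_omnibus = stats.f_oneway(*groups)     # omnibus ANOVA F-test
    print(round(f_stat, 2), round(p_omnibus, 4))

    if p_omnibus < 0.05:
        res = stats.tukey_hsd(*groups)              # FWER-adjusted pairwise comparisons
        print(res.pvalue)                           # K x K matrix of adjusted p-values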

The Kruskal–Wallis test is the non-parametric alternative to ANOVA. Multiple comparisons can be done using pairwise comparisons (for example using Wilcoxon rank sum tests) and using a correction to determine if the post-hoc tests are significant (for example a Bonferroni correction).
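
A minimal sketch of this non-parametric workflow in Python/SciPy, again on simulated data:

    from itertools import combinations
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    groups = [rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.0, 0.8)]

    h_stat, p_kw = stats.kruskal(*groups)           # omnibus Kruskal-Wallis test
    print(round(h_stat, 2), round(p_kw, 4))

    pairs = list(combinations(range(len(groups)), 2))
    alpha_adj = 0.05 / len(pairs)                   # Bonferroni-adjusted threshold
    for i, j in pairs:
        stat, p = stats.ranksums(groups[i], groups[j])  # Wilcoxon rank-sum test
        print(i, j, round(p, 4), p < alpha_adj)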

Large-scale multiple testing

Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in an analysis of variance. A different set of techniques have been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, in genomics, when using technologies such as microarrays, expression levels of tens of thousands of genes can be measured, and genotypes for millions of genetic markers can be measured. Particularly in the field of genetic association studies, there has been a serious problem with non-replication — a result being strongly statistically significant in one study but failing to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of the causes.

In different branches of science, multiple testing is handled in different ways. It has been argued that if statistical tests are only performed when there is a strong basis for expecting the result to be true, multiple comparisons adjustments are not necessary.[9] It has also been argued that use of multiple testing corrections is an inefficient way to perform empirical research, since multiple testing adjustments control false positives at the potential expense of many more false negatives. On the other hand, it has been argued that advances in measurement and information technology have made it far easier to generate large datasets for exploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of the hypotheses to be true. In this situation, very high false positive rates are expected unless multiple comparisons adjustments are made.[10]

For large-scale testing problems where the goal is to provide definitive results, the familywise error rate remains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of the false discovery rate (FDR)[11][12][13] is often preferred. The FDR, defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives", of which a high proportion are likely to be true. The false positives within the candidate set can then be identified in a follow-up study.
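
The Benjamini–Hochberg step-up procedure is the standard way to control the FDR; a minimal sketch in Python/NumPy, written from the textbook definition rather than any particular library:

    import numpy as np

    def benjamini_hochberg(p, q=0.05):
        # Reject the k smallest p-values, where k is the largest i (1-based)
        # such that the i-th smallest p-value satisfies p_(i) <= q * i / m.
        p = np.asarray(p)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= q * np.arange(1, m + 1) / m
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    # Example: a few very small p-values mixed with many uniform ones.
    rng = np.random.default_rng(3)
    pvals = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 95)])
    print(int(benjamini_hochberg(pvals).sum()))     # number of discoveries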

Assessing whether any alternative hypotheses are true

Figure: A normal quantile plot for a simulated set of test statistics that have been standardized to be Z-scores under the null hypothesis. The departure of the upper tail of the distribution from the expected trend along the diagonal is due to the presence of substantially more large test statistic values than would be expected if all null hypotheses were true. The red point corresponds to the fourth largest observed test statistic, which is 3.13, versus an expected value of 2.06. The blue point corresponds to the fifth smallest test statistic, which is -1.75, versus an expected value of -1.96. The graph suggests that it is unlikely that all the null hypotheses are true, and that most or all instances of a true alternative hypothesis result from deviations in the positive direction.

A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use the Poisson distribution as a model for the number of significant results at a given level α that would be found when all null hypotheses are true. If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results. For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 61 significant tests is less than 0.05, so if we observe more than 61 significant results, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it overstates the evidence that some of the alternative hypotheses are true when the test statistics are positively correlated, which commonly occurs in practice.
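
A minimal sketch of this meta-test in Python/SciPy, reproducing the numbers in the example above:

    from scipy import stats

    m, alpha = 1000, 0.05
    expected = m * alpha                            # 50 significant tests under the global null
    p_tail = stats.poisson.sf(61, mu=expected)      # P(more than 61 significant tests)
    print(round(p_tail, 4))                         # compare with the 0.05 threshold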


Another common approach that can be used in situations where the test statistics can be standardized to Z-scores is to make a normal quantile plot of the test statistics. If the observed quantiles are markedly more dispersed than the normal quantiles, this suggests that some of the significant results may be true positives.
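
A minimal sketch of such a plot in Python (scipy.stats.probplot draws the normal quantile plot; the simulated data mix null Z-scores with a handful of shifted ones):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(4)
    z = np.concatenate([rng.normal(0, 1, 950),      # null test statistics
                        rng.normal(3, 1, 50)])      # true alternatives, shifted upward

    stats.probplot(z, dist="norm", plot=plt)        # observed vs. theoretical quantiles
    plt.title("Normal quantile plot of test statistics")
    plt.show()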


See also

Key concepts
General methods of alpha adjustment for multiple comparisons
Related concepts

References


  1. ^
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^ Howell (2002, Chapter 12: Multiple comparisons among treatment means)
  8. ^
  9. ^
  10. ^
  11. ^
  12. ^
  13. ^

Further reading

  • F. Bretz, T. Hothorn, P. Westfall (2010), Multiple Comparisons Using R, CRC Press
  • S. Dudoit and M. J. van der Laan (2008), Multiple Testing Procedures with Application to Genomics, Springer
  • B. Phipson and G. K. Smyth (2010), Permutation P-values Should Never Be Zero: Calculating Exact P-values when Permutations are Randomly Drawn, Statistical Applications in Genetics and Molecular Biology, Vol. 9, Iss. 1, Article 39
  • P. H. Westfall and S. S. Young (1993), Resampling-based Multiple Testing: Examples and Methods for p-Value Adjustment, Wiley
  • P. Westfall, R. Tobias, R. Wolfinger (2011) Multiple comparisons and multiple testing using SAS, 2nd edn, SAS Institute