In statistics, the Jonckheere trend test^{[1]} (sometimes called the Jonckheere–Terpstra^{[2]} test) is a test for an ordered alternative hypothesis within an independent samples (between-participants) design. It is similar to the Kruskal–Wallis test in that the null hypothesis is that several independent samples are from the same population. However, with the Kruskal–Wallis test there is no a priori ordering of the populations from which the samples are drawn. When there is an a priori ordering, the Jonckheere test has more statistical power than the Kruskal–Wallis test.
The null and alternative hypotheses can be conveniently expressed in terms of population medians for k populations (where k > 2). Letting θ_{i} be the population median for the ith population, the null hypothesis is:

H_0: \theta_1 = \theta_2 = \cdots = \theta_k
The alternative hypothesis is that the population medians have an a priori ordering, e.g.:

H_A: \theta_1 \leq \theta_2 \leq \cdots \leq \theta_k
with at least one strict inequality.
Procedure
The test can be seen as a special case of Maurice Kendall's more general method of rank correlation^{[3]} and makes use of Kendall's S statistic. This can be computed in one of two ways:
The ‘direct counting’ method

Arrange the samples in the predicted order

For each score in turn, count how many scores in the samples to the right are larger than the score in question. This is P.

For each score in turn, count how many scores in the samples to the right are smaller than the score in question. This is Q.

S = P – Q
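The direct-counting steps above can be sketched in a few lines of Python (an illustrative helper, not part of any named library):

```python
# Minimal sketch of the 'direct counting' method; `samples` must already
# be arranged in the predicted order. Illustrative helper, not a library API.
def jonckheere_S(samples):
    """Return Kendall's S = P - Q by direct counting."""
    P = Q = 0
    for i, sample in enumerate(samples):
        # pool every score in the samples to the right of the current one
        right = [x for later in samples[i + 1:] for x in later]
        for score in sample:
            P += sum(1 for x in right if x > score)  # larger scores to the right
            Q += sum(1 for x in right if x < score)  # smaller scores to the right
    return P - Q
```

Tied scores across samples contribute to neither P nor Q, matching the strict inequalities in the counting rules above.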
The ‘nautical’ method

Cast the data into an ordered contingency table, with the levels of the independent variable increasing from left to right, and values of the dependent variable increasing from top to bottom.

For each entry in the table, count all other entries that lie to the ‘South East’ of the particular entry. This is P.

For each entry in the table, count all other entries that lie to the ‘South West’ of the particular entry. This is Q.

S = P – Q
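The same S can be sketched from the contingency-table form, with rows indexing increasing values of the dependent variable and columns the ordered groups (illustrative code, not a library function):

```python
# Sketch of the 'nautical' method: for each occupied cell, count the
# occupants of cells to the south-east (P) and south-west (Q).
def nautical_S(table):
    """table[r][c] = number of scores with the r-th DV value in group c."""
    nrows, ncols = len(table), len(table[0])
    P = Q = 0
    for r in range(nrows):
        for c in range(ncols):
            if table[r][c] == 0:
                continue
            se = sum(table[r2][c2] for r2 in range(r + 1, nrows)
                     for c2 in range(c + 1, ncols))  # entries to the south-east
            sw = sum(table[r2][c2] for r2 in range(r + 1, nrows)
                     for c2 in range(c))             # entries to the south-west
            P += table[r][c] * se
            Q += table[r][c] * sw
    return P - Q
```

Because cells in the same row or column are neither south-east nor south-west of one another, ties across samples again contribute nothing to P or Q.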
Note that there will always be ties in the independent variable (individuals are ‘tied’ in the sense that they are in the same group), but there may or may not be ties in the dependent variable. If there are no ties – or the ties occur within a particular sample (which does not affect the value of the test statistic) – exact tables of S are available; for example, Jonckheere^{[1]} provided selected tables for values of k from 3 to 6 and equal sample sizes (m) from 2 to 5. Leach^{[4]} presented critical values of S for k = 3 with sample sizes ranging from 2,2,1 to 5,5,5.
Normal approximation to S
The standard normal distribution can be used to approximate the distribution of S under the null hypothesis for cases in which exact tables are not available. The mean of the distribution of S will always be zero, and assuming that there are no tied scores between the values in two (or more) different samples, the variance is given by

\operatorname{VAR}(S)=\frac{2(n^3-\sum t^3_i)+3(n^2-\sum t^2_i)}{18}
where n is the total number of scores, and t_{i} is the number of scores in the ith sample. The approximation to the standard normal distribution can be improved by the use of a continuity correction, which moves S one unit towards zero: 1 is subtracted from a positive S value and 1 is added to a negative S value, giving S_{c}. The z-score equivalent is then given by

z =\frac{S_c}{\sqrt{\operatorname{VAR}(S)}}
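As a minimal sketch of this approximation, assuming no tied scores between samples (the function name is illustrative, not from any library):

```python
import math

# Sketch of the normal approximation with continuity correction,
# assuming no tied scores between samples; t holds the sample sizes.
def jonckheere_z(S, t):
    n = sum(t)
    # no-ties variance: [2(n^3 - sum t^3) + 3(n^2 - sum t^2)] / 18
    var_S = (2 * (n**3 - sum(x**3 for x in t))
             + 3 * (n**2 - sum(x**2 for x in t))) / 18
    # continuity correction: move S one unit towards zero
    S_c = S - 1 if S > 0 else S + 1 if S < 0 else 0
    return S_c / math.sqrt(var_S)
```

Note that this uses the no-ties variance above; when scores are tied between samples, the tie-corrected variance of the next section applies instead, without the continuity correction.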
Ties
If scores are tied between the values in two (or more) different samples, there are no exact tables for the S distribution and the approximation to the normal distribution has to be used. In this case no continuity correction is applied to the value of S, and the variance is given by

\begin{align}\operatorname{VAR}(S)=&\frac{2\left(n^3-\sum t^3_i-\sum u^3_i\right)+3\left(n^2-\sum t^2_i-\sum u^2_i\right)+5n}{18} \\ &{}+\frac{\left(\sum t^3_i-3\sum t^2_i+2n\right)\left(\sum u^3_i-3\sum u^2_i+2n\right)}{9n(n-1)(n-2)} \\ &{}+\frac{\left(\sum t^2_i-n\right)\left(\sum u^2_i-n\right)}{2n(n-1)}\end{align}
where t_{i} is a row marginal total and u_{i} a column marginal total in the contingency table. The z-score equivalent is then given by

z =\frac{S}{\sqrt{\operatorname{VAR}(S)}}
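The tie-corrected variance can be computed directly from the marginal totals of the ordered contingency table; a sketch (illustrative helper name):

```python
# Sketch of the tie-corrected variance of S; t and u are the row and
# column marginal totals of the ordered contingency table.
def var_S_ties(t, u):
    n = sum(t)  # total number of scores; equals sum(u)
    st2, st3 = sum(x**2 for x in t), sum(x**3 for x in t)
    su2, su3 = sum(x**2 for x in u), sum(x**3 for x in u)
    term1 = (2 * (n**3 - st3 - su3) + 3 * (n**2 - st2 - su2) + 5 * n) / 18
    term2 = ((st3 - 3 * st2 + 2 * n) * (su3 - 3 * su2 + 2 * n)
             / (9 * n * (n - 1) * (n - 2)))
    term3 = (st2 - n) * (su2 - n) / (2 * n * (n - 1))
    return term1 + term2 + term3
```

Each of the three terms corresponds to one line of the displayed formula above.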
A numerical example
In a partial replication of a study by Loftus and Palmer^{[5]} participants were assigned at random to one of three groups, and then shown a film of two cars crashing into each other. After viewing the film, the participants in one group were asked the following question: “About how fast were the cars going when they contacted each other?” Participants in a second group were asked, “About how fast were the cars going when they bumped into each other?” Participants in the third group were asked, “About how fast were the cars going when they smashed into each other?” Loftus and Palmer predicted that the action verb used (contacted, bumped, smashed) would influence the speed estimates in miles per hour (mph) such that action verbs implying greater energy would lead to higher estimated speeds. The following results were obtained (simulated data):

Contacted   Bumped     Smashed
10          12         20
12          18         25
14          20         27
16          22         30
mdn = 13    mdn = 19   mdn = 26

The ‘direct counting’ method

The samples are already in the predicted order

For each score in turn, count how many scores in the samples to the right are larger than the score in question to obtain P:


P = 8 + 7 + 7 + 7 + 4 + 4 + 3 + 3 = 43

For each score in turn, count how many scores in the samples to the right are smaller than the score in question to obtain Q:


Q = 0 + 0 + 1 + 1 + 0 + 0 + 0 + 1 = 3

S = P − Q = 43 − 3

S = 40
The 'nautical' method

Cast the data into an ordered contingency table:

mph           Contacted   Bumped   Smashed   Totals (t_i)
10            1           0        0         1
12            1           1        0         2
14            1           0        0         1
16            1           0        0         1
18            0           1        0         1
20            0           1        1         2
22            0           1        0         1
25            0           0        1         1
27            0           0        1         1
30            0           0        1         1
Totals (u_i)  4           4        4         12

For each entry in the table, count all other entries that lie to the 'South East' of the particular entry. This is P:

P = (1 × 8) + (1 × 7) + (1 × 7) + (1 × 7) + (1 × 4) + (1 × 4) + (1 × 3) + (1 × 3) = 43

For each entry in the table, count all other entries that lie to the 'South West' of the particular entry. This is Q:

Q = (1 × 2) + (1 × 1) = 3

S = P − Q = 43 − 3

S = 40
Using exact tables
When the ties between samples are few (as in this example) Leach^{[4]} suggested that ignoring the ties and using exact tables would provide a reasonably accurate result. Jonckheere^{[1]} suggested breaking the ties against the alternative hypothesis and then using exact tables. In the current example, tied scores only appear in adjacent groups; breaking each tie against the alternative hypothesis (substituting 11 mph in place of 12 mph in the Bumped sample, and 19 mph in place of 20 mph in the Smashed sample) reduces S by 1 per broken tie, giving S = 38. From tables with k = 3 and m = 4, the critical S value for α = 0.05 is 36, so the result would still be declared statistically significant at this level.
Computing a standard normal approximation

As n = 12, n^2 = 144 and n^3 = 1728. Also

\sum t^2_i = 16

\sum t^3_i = 24

\sum u^2_i = 48

\sum u^3_i = 192
The variance of S is then

\begin{align}\operatorname{VAR}(S)=&\frac{2(1728 - 24 - 192)+3(144 - 16 - 48)+ 60}{18} \\ &+\frac{(24 - 48 + 24)(192 - 144 + 24)}{9 \times 12 \times 11 \times 10} \\ &+\frac{(16 - 12)(48 - 12)}{2 \times 12 \times 11} \\ &= 185.212\end{align}
And z is given by

z =\frac{S}{\sqrt{\operatorname{VAR}(S)}}=\frac{40}{\sqrt{185.212}} = 2.939
For α = 0.05 (one-sided) the critical z value is 1.645, so again the result would be declared significant at this level. A similar test for trend within the context of repeated measures (within-participants) designs, based on Spearman's rank correlation coefficient, was developed by Page.^{[6]}
References

^ ^{a} ^{b} ^{c} Jonckheere, A. R. (1954). “A distribution-free k-sample test against ordered alternatives”. Biometrika, 41: 133–145. http://dx.doi.org/10.2307/2333011

^ Terpstra, T. J. (1952). “The asymptotic normality and consistency of Kendall's test against trend, when ties are present in one ranking”. Indagationes Mathematicae, 14: 327–333. http://oai.cwi.nl/oai/asset/8258/8258A.pdf

^ Kendall, M.G. (1962). “Rank correlation methods” (3rd Ed.). London: Charles Griffin.

^ ^{a} ^{b} Leach, C. (1979). “Introduction to statistics: A nonparametric approach for the social sciences”. Chichester: John Wiley.

^ Loftus, E.F.; Palmer, J.C. (1974). “Reconstruction of automobile destruction: An example of the interaction between language and memory”. Journal of Verbal Learning and Verbal Behavior, 13: 585–589. http://dx.doi.org/10.1016/S0022-5371(74)80011-3

^ Page, E. B. (1963). "Ordered hypotheses for multiple treatments: A significance test for linear ranks". Journal of the American Statistical Association 58 (301): 216–30. http://dx.doi.org/10.2307/2282965