Partition of sums of squares

The partition of sums of squares is a concept that permeates much of inferential statistics and descriptive statistics. More properly, it is the partitioning of sums of squared deviations or errors. Mathematically, the sum of squared deviations is an unscaled, or unadjusted, measure of dispersion (also called variability). When scaled for the number of degrees of freedom, it estimates the variance, or spread of the observations about their mean value. Partitioning the sum of squared deviations into various components allows the overall variability in a dataset to be ascribed to different types or sources of variability, with the relative importance of each quantified by the size of the corresponding component of the overall sum of squares.

Contents

  • 1 Background
  • 2 Partitioning the sum of squares in linear regression
    • 2.1 Proof
    • 2.2 Further partitioning

Background

The distance from any point in a collection of data to the mean of the data is its deviation. This can be written as y_i - \overline{y}, where y_i is the ith data point and \overline{y} is the estimate of the mean. Squaring each such deviation and summing, as in \sum_{i=1}^n\left(y_i-\overline{y}\,\right)^2, gives the "sum of squares" for these data.
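As a concrete illustration, the sum of squared deviations can be computed directly from this definition. The following is a minimal sketch using NumPy; the data values are invented for the example.

import numpy as np

y = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # hypothetical data points y_i
y_bar = y.mean()                                          # estimate of the mean
sum_of_squares = np.sum((y - y_bar) ** 2)                 # sum of squared deviations
print(sum_of_squares)                                     # 32.0 for these values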

When more data are added to the collection, the sum of squares increases, except in unlikely cases such as a new data point being exactly equal to the mean. So the sum of squares usually grows with the size of the data collection; this is a manifestation of the fact that it is unscaled.

In many cases, the number of degrees of freedom is simply the number of data points in the collection minus one. We write this as n − 1, where n is the number of data points.

Scaling (also known as normalizing) means adjusting the sum of squares so that it does not grow as the size of the data collection grows. This is important when we want to compare samples of different sizes, such as a sample of 100 people compared to a sample of 20 people. If the sum of squares were not normalized, its value would always be larger for the sample of 100 people than for the sample of 20 people. To scale the sum of squares, we divide it by the degrees of freedom, i.e., we calculate the sum of squares per degree of freedom, which is the variance. The standard deviation, in turn, is the square root of the variance.
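Continuing the illustrative sketch above (again with made-up numbers), dividing the sum of squares by the n − 1 degrees of freedom gives the variance, and taking the square root gives the standard deviation.

import numpy as np

y = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # same hypothetical data as above
sum_of_squares = np.sum((y - y.mean()) ** 2)              # unscaled: grows with sample size
variance = sum_of_squares / (len(y) - 1)                  # sum of squares per degree of freedom
std_dev = np.sqrt(variance)                               # standard deviation
# These match the built-ins np.var(y, ddof=1) and np.std(y, ddof=1).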

The above describes how the sum of squares is used in descriptive statistics; see the article on the total sum of squares for an application of this broad principle to inferential statistics.

Partitioning the sum of squares in linear regression

Theorem. Given a linear regression model y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i including a constant, fitted to a sample (y_i, x_{i1}, \ldots, x_{ip}), \, i = 1, \ldots, n of n observations, the total sum of squares TSS = \sum_{i = 1}^n (y_i - \bar{y})^2 can be partitioned into the explained sum of squares (ESS) and the residual sum of squares (RSS) as follows:

\mathrm{TSS} = \mathrm{ESS} + \mathrm{RSS},

where this equation is equivalent to each of the following forms:

\begin{align} \left\| y - \bar{y} \mathbf{1} \right\|^2 &= \left\| \hat{y} - \bar{y} \mathbf{1} \right\|^2 + \left\| \hat{\varepsilon} \right\|^2, \quad \mathbf{1} = (1, 1, \ldots, 1)^T ,\\ \sum_{i = 1}^n (y_i - \bar{y})^2 &= \sum_{i = 1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i = 1}^n (y_i - \hat{y}_i)^2 ,\\ \sum_{i = 1}^n (y_i - \bar{y})^2 &= \sum_{i = 1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i = 1}^n \hat{\varepsilon}_i^2 .\\ \end{align}
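This identity can be verified numerically. The following sketch uses NumPy with simulated data; the simulation setup, coefficients, and variable names are purely illustrative and not part of the statement above.

import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])           # design matrix with a constant column
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)   # simulated responses

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least-squares estimates
y_hat = X @ beta_hat                               # fitted values
residuals = y - y_hat                              # estimated errors

TSS = np.sum((y - y.mean()) ** 2)
ESS = np.sum((y_hat - y.mean()) ** 2)
RSS = np.sum(residuals ** 2)
print(np.isclose(TSS, ESS + RSS))                  # True: the partition holds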

Proof

\begin{align} \sum_{i = 1}^n (y_i - \overline{y})^2 &= \sum_{i = 1}^n (y_i - \overline{y} + \hat{y}_i - \hat{y}_i)^2 = \sum_{i = 1}^n ((\hat{y}_i - \bar{y}) + \underbrace{(y_i - \hat{y}_i)}_{\hat{\varepsilon}_i})^2 \\ &= \sum_{i = 1}^n ((\hat{y}_i - \bar{y})^2 + 2 \hat{\varepsilon}_i (\hat{y}_i - \bar{y}) + \hat{\varepsilon}_i^2) \\ &= \sum_{i = 1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i = 1}^n \hat{\varepsilon}_i^2 + 2 \sum_{i = 1}^n \hat{\varepsilon}_i (\hat{y}_i - \bar{y}) \\ &= \sum_{i = 1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i = 1}^n \hat{\varepsilon}_i^2 + 2 \sum_{i = 1}^n \hat{\varepsilon}_i(\hat{\beta}_0 + \hat{\beta}_1 x_{i1} + \cdots + \hat{\beta}_p x_{ip} - \overline{y}) \\ &= \sum_{i = 1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i = 1}^n \hat{\varepsilon}_i^2 + 2 (\hat{\beta}_0 - \overline{y}) \underbrace{\sum_{i = 1}^n \hat{\varepsilon}_i}_0 + 2 \hat{\beta}_1 \underbrace{\sum_{i = 1}^n \hat{\varepsilon}_i x_{i1}}_0 + \cdots + 2 \hat{\beta}_p \underbrace{\sum_{i = 1}^n \hat{\varepsilon}_i x_{ip}}_0 \\ &= \sum_{i = 1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i = 1}^n \hat{\varepsilon}_i^2 = \mathrm{ESS} + \mathrm{RSS} \\ \end{align}

The requirement that the model includes a constant or equivalently that the design matrix contains a column of ones ensures that \sum_{i = 1}^n \hat{\varepsilon}_i = 0 .
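These orthogonality conditions, \sum_{i = 1}^n \hat{\varepsilon}_i = 0 and \sum_{i = 1}^n \hat{\varepsilon}_i x_{ij} = 0, can likewise be checked numerically. The sketch below uses arbitrary simulated data and an ordinary least-squares fit via NumPy, purely as an illustration.

import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])  # design matrix including a column of ones
y = rng.normal(size=40)                                       # arbitrary responses
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

print(np.isclose(residuals.sum(), 0.0))      # sum of residuals is zero (constant term present)
print(np.allclose(X.T @ residuals, 0.0))     # residuals are orthogonal to every regressor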

Some readers may find the following version of the proof, set in vector form, more enlightening:

\begin{align} \mathrm{TSS} = \left\| \mathbf{y} - \bar{y} \mathbf{1} \right\|^2 &= \left\| \mathbf{y} - \bar{y} \mathbf{1} + \hat{\mathbf{y}} - \hat{\mathbf{y}} \right\|^2 , \\ &= \left\| \left( \hat{\mathbf{y}} - \bar{y} \mathbf{1} \right) + \left( \mathbf{y} - \hat{\mathbf{y}} \right) \right\|^2 , \\ &= \left\| \hat{\mathbf{y}} - \bar{y} \mathbf{1} \right\|^2 + \left\| \hat{\varepsilon} \right\|^2 + 2 \hat{\varepsilon}^T \left( \hat{\mathbf{y}} - \bar{y} \mathbf{1} \right) , \\ &= \mathrm{ESS} + \mathrm{RSS} + 2 \hat{\varepsilon}^T \left( X \hat{\beta} - \bar{y} \mathbf{1} \right) , \\ &= \mathrm{ESS} + \mathrm{RSS} + 2 \left( \hat{\varepsilon}^T X \right) \hat{\beta} - 2 \bar{y} \, \hat{\varepsilon}^T \mathbf{1} , \\ &= \mathrm{ESS} + \mathrm{RSS} . \\ \end{align}

The elimination of terms in the last line uses the fact that

\hat{\varepsilon}^T X = \left( \mathbf{y} - \hat{\mathbf{y}} \right)^T X = \mathbf{y}^T \left( I - X \left( X^T X \right)^{-1} X^T \right) X = \mathbf{y}^T \left( X - X \right) = \mathbf{0},

which in particular gives \hat{\varepsilon}^T \mathbf{1} = 0, since the column of ones \mathbf{1} is a column of X.
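A small numerical check of this projection argument is sketched below; the hat matrix is constructed explicitly only for illustration (in practice a least-squares solver would be used), and the data are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.normal(size=(30, 3))])  # arbitrary design matrix with a constant
y = rng.normal(size=30)

H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix X (X^T X)^{-1} X^T
residuals = (np.eye(30) - H) @ y              # residual vector (I - H) y
print(np.allclose(residuals @ X, 0.0))        # verifies \hat{\varepsilon}^T X = 0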

Further partitioning

Note that the residual sum of squares can be further partitioned as the lack-of-fit sum of squares plus the sum of squares due to pure error.
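A hedged sketch of this further split is given below, assuming replicated observations at each distinct x value; the data are invented for the example. The pure-error sum of squares measures the spread of the replicates about their group means, and the lack-of-fit sum of squares measures the distance of the group means from the fitted line.

import numpy as np

# Hypothetical data with replicated observations at each distinct x value
x = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
y = np.array([1.1, 0.9, 2.3, 2.1, 2.8, 3.2, 4.5, 4.1])

X = np.column_stack([np.ones_like(x), x])                 # straight-line model with a constant
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)                     # residual sum of squares

ss_pure_error = 0.0
ss_lack_of_fit = 0.0
for v in np.unique(x):
    group = y[x == v]                                     # replicates at this x value
    fitted = beta_hat[0] + beta_hat[1] * v                # fitted value at this x
    ss_pure_error += np.sum((group - group.mean()) ** 2)  # spread of replicates about their mean
    ss_lack_of_fit += len(group) * (group.mean() - fitted) ** 2  # group mean vs. fitted line

print(np.isclose(rss, ss_lack_of_fit + ss_pure_error))    # RSS = lack-of-fit SS + pure-error SS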
