Principal component analysis

Idea

According to Wikipedia:

Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. This transformation is defined in such a way that the first principal component has as high a variance as possible (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (uncorrelated with) the preceding components. Principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables.

PCA has a variety of alternative names:

  • discrete Karhunen–Loève transform (KLT),
  • Hotelling transform,
  • proper orthogonal decomposition (POD).

PCA with $n$ basis vectors can also be viewed as a form of “compression”/data reduction, namely the selection of an $n$-dimensional subspace $S$ such that projection of the dataset vectors to $S$ has the minimal summed $L_2$ error relative to the original dataset. (This $L_2$ minimisation interpretation is one “explanation” of the dependence of the PCA decomposition on variable scaling.)
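As a concrete illustration of this compression view, the following sketch (NumPy only, with illustrative names rather than any particular library's API) finds the top-$n$ principal directions via an SVD of the centred data and reports the summed $L_2$ reconstruction error.

```python
import numpy as np

def pca_compress(Y, n):
    """Project the rows of Y onto the best n-dimensional subspace
    (through the data mean) and return the reconstructions and the
    summed squared (L2) reconstruction error."""
    mu = Y.mean(axis=0)                     # centre on the sample mean
    Yc = Y - mu
    # SVD of the centred data: the rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    V = Vt[:n]                              # top-n orthonormal directions
    Y_hat = mu + (Yc @ V.T) @ V             # project onto S and reconstruct
    err = np.sum((Y - Y_hat) ** 2)          # summed L2 error
    return Y_hat, err

# toy usage with correlated 3-dimensional data
rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.2, 0.1, 0.1]])
_, err = pca_compress(Y, n=2)
print(err)
```

Rescaling one of the columns of `Y` changes which subspace is optimal, which is the scaling sensitivity mentioned above.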

PCA is also often used for visualising “dominant” effects in a data set and for creating predictive models (reducing the dimensionality of the data set either for computational reasons or for machine learning/statistical regularisation).

One obvious point is that PCA is only effective at finding structure that can be represented by (approximately) orthogonal vectors, so other preprocessing such as detrending may be required for PCA to be informative.
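For example, a minimal sketch (assuming time-indexed rows and only NumPy) of removing a per-variable linear trend before applying PCA:

```python
import numpy as np

def detrend_columns(Y):
    """Remove a least-squares linear trend (in the row index) from each
    column of Y; a common preprocessing step before applying PCA to
    time-indexed data."""
    t = np.arange(Y.shape[0])
    A = np.column_stack([np.ones(len(t)), t])   # intercept + linear trend
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return Y - A @ coeffs
```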

Centering

The orthogonal transformation used by PCA implicitly assumes that the dataset is generated by a process with a zero “expected value”, so that generally a “mean vector” is subtracted from all the data points or, equivalently, a constant mean vector is added to all reconstructions. There are two approaches to achieving this:

  1. Using additional knowledge about the process generating the dataset to determine the mean, e.g., that the mean should actually be exactly zero.

  2. Using the computed average of the data set as the “mean vector”.

The compression viewpoint is relatively insensitive to the choice of mean, but techniques which use the vector of PCA components as random variates can be significantly affected by the mean vector.
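A minimal sketch covering both approaches, assuming NumPy and with illustrative names: the sample average is used unless a known mean (e.g. a zero vector) is supplied.

```python
import numpy as np

def centre(Y, mean=None):
    """Centre the dataset before PCA.

    If `mean` is None, the sample average of the rows is used
    (approach 2); otherwise the supplied vector is used, e.g. a zero
    vector when the generating process is known to have zero expected
    value (approach 1).
    """
    mu = Y.mean(axis=0) if mean is None else np.asarray(mean, dtype=float)
    return Y - mu, mu
```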

Basic mathematical formulation

Given an $m$-dimensional dataset $\{y^{i}\}_{i=0}^K$, the $n$-dimensional PCA is a set of $n$ orthonormal $m$-dimensional vectors $\{\mathbf{v}^j\}_{j=1}^n$, together with an “artificial” mean vector $\mathbf{v}^0$ (often referred to as $\mu$) and a set of $n$-dimensional “coefficient” vectors $\{\lambda^{i}\}_{i=0}^K$, which can be used to represent approximations $\hat{y}^i$ to the original $y^i$ using

$$\hat{y}^i = \mu + \sum_{j=1}^n \lambda^i_j \mathbf{v}^j = \sum_{j=0}^n \lambda^i_j \mathbf{v}^j$$

where the second expression absorbs the mean vector by adding a coefficient $\lambda^i_0$ which is always $1$. The crucial property is that, other than the mean vector, all the $\mathbf{v}$s are orthonormal, i.e., $\mathbf{v}^i \cdot \mathbf{v}^j = \delta_{i j}$. In addition to optimising the “reconstruction error” for a given number of PCA basis vectors, this orthogonality gives rise to other useful calculational properties of PCA.
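In particular, because the $\mathbf{v}^j$ are orthonormal, the least-squares optimal coefficients are simple dot products, $\lambda^i_j = (y^i - \mu)\cdot\mathbf{v}^j$. A minimal NumPy sketch of computing the basis and coefficients via the SVD (names are illustrative):

```python
import numpy as np

def pca_basis_and_coefficients(Y, n):
    """Return (mu, V, L): the mean vector, the n orthonormal basis
    vectors (as rows of V) and the coefficient vectors (as rows of L),
    so that Y is approximated by mu + L @ V."""
    mu = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    V = Vt[:n]                    # orthonormal rows: V @ V.T == identity
    L = (Y - mu) @ V.T            # lambda^i_j = (y^i - mu) . v^j
    return mu, V, L

# reconstruction of the whole dataset: hat{y}^i = mu + sum_j lambda^i_j v^j
# Y_hat = mu + L @ V
```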

In the above formulation the $\lambda^i_j$s may naturally have very different magnitudes, so in some applications there is a further decomposition as

$$\hat{y}^i = \mu + \sum_{j=1}^n w^i_j (\nu^j \mathbf{v}^j) = \mu + \sum_{j=1}^n w^i_j {\mathbf{v}'}^j$$

(Again the mean $\mu$ can be absorbed into an “artificial” principal component if desired.) Here the $\nu^j$ value is the “length scale” of the $\mathbf{v}^j$ principal component, so it is generally a reasonable assumption that the loadings $w^i_j$ come from the same probability distribution, often further assumed to be the $N(0,1)$ normal distribution. (Technically the optimal $\mathbf{v}^j$ vectors turn out to be the eigenvectors of the covariance matrix of the sample set, and the $\nu^j$s are the square roots of the corresponding eigenvalues.)
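A sketch of that characterisation (NumPy only, illustrative names): the principal directions are taken as eigenvectors of the sample covariance matrix, the $\nu^j$ as the square roots of its eigenvalues, and the loadings $w^i_j$ are rescaled accordingly.

```python
import numpy as np

def pca_scaled_loadings(Y, n):
    """Compute the principal directions from the sample covariance
    matrix and return loadings rescaled by the length scales nu^j
    (square roots of the eigenvalues), so the columns of W have
    roughly unit variance."""
    mu = Y.mean(axis=0)
    Yc = Y - mu
    C = np.cov(Yc, rowvar=False)            # sample covariance matrix
    evals, evecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:n]     # indices of the n largest
    V = evecs[:, order].T                   # rows are the v^j
    nu = np.sqrt(evals[order])              # length scales nu^j
    W = (Yc @ V.T) / nu                     # rescaled loadings w^i_j
    return mu, V, nu, W
```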

Sparse PCA

Sparse PCA attempts to create a new set of orthogonal vectors that balance the fidelity of the reconstruction with the sparseness of the vectors.
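For example, assuming scikit-learn is available, its `SparsePCA` estimator trades reconstruction fidelity against sparsity via an $L_1$ penalty weight `alpha` (a sketch, not a recommendation of particular settings):

```python
import numpy as np
from sklearn.decomposition import SparsePCA   # assumes scikit-learn is installed

rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 10))

# alpha is the L1 penalty weight: larger values give sparser component vectors
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
W = spca.fit_transform(Y)         # coefficients, shape (100, 3)
V = spca.components_              # sparse component vectors, shape (3, 10)
print(np.mean(V == 0))            # fraction of exactly-zero entries
```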
