NMST539 | Lab Session 6

Principal Component Analysis (applications and examples in R)

Summer Term 2021/2022 | 29/03/22 | source file (UTF8 encoding)

The R software is available for download (free of charge) from the website: https://www.r-project.org. A user-friendly interface (one of many): RStudio. Manuals and introductions to R (in Czech or English):
1. Principal component analysis / theoretical aspects

Principal component analysis (PCA) is a multivariate exploratory method based on the variance-covariance matrix of some random vector \(\boldsymbol{X} \in \mathbb{R}^{p}\). Of course, instead of working with the (theoretical) vector itself and the (usually unknown) variance-covariance matrix, we use a random sample drawn from the same distribution and the sample version of the variance-covariance matrix. The main idea is to consider \(\ell < p\) principal components instead of the whole \(p \in \mathbb{N}\) dimensional random vector (dimensionality reduction). In other words, we look for a new \(\ell\)-dimensional representation of the original \(p\)-dimensional random vector. The principal components are orthogonal linear combinations of the original covariates and, in addition, they are ordered with respect to decreasing variance. In most cases \(\ell \ll p\) (a significant reduction of dimensionality). In general, the principal components can be seen from (at least four) different perspectives:
Statistically speaking, the principal components are linear combinations of the original covariates, such that these new covariates are mutually uncorrelated, with zero mean and a diagonal variance-covariance matrix, which has the eigenvalues of the (theoretical/empirical) variance-covariance matrix on its diagonal. Formally speaking, PCA can be seen as a maximization task, where the first principal component \(\boldsymbol{\delta}_1^\top \boldsymbol{X}\) (i.e., a linear combination of the elements of the vector \(\boldsymbol{X} \in \mathbb{R}^{p}\)), defined by the vector of unknown parameters \(\boldsymbol{\delta}_1 \in \mathbb{R}^p\), is obtained as a solution of the maximization problem \[ \boldsymbol{\delta}_1 = \mathop{\mathrm{arg\,max}}_{\{\boldsymbol{\delta};~\|\boldsymbol{\delta}\| = 1 \}} ~~Var (\boldsymbol{\delta}^\top \boldsymbol{X}). \] The second principal component \(\boldsymbol{\delta}_2^\top \boldsymbol{X}\) (again a linear combination of the vector elements) is defined by an analogous maximization task, however, with an additional orthogonality constraint – namely \(\boldsymbol{\delta}_1^\top\boldsymbol{\delta}_2 = 0\), or, equivalently, \(\boldsymbol{\delta}_1^\top \boldsymbol{X}\) and \(\boldsymbol{\delta}_2^\top \boldsymbol{X}\) are uncorrelated random variables. Thus, it holds that \[ Cov(\boldsymbol{\delta}_1^\top \boldsymbol{X}, \boldsymbol{\delta}_2^\top \boldsymbol{X}) = 0. \] The principal components can, however, also be seen from a geometrical perspective as specific projections into well-defined linear subspaces – such as those used in the linear regression framework or the total least squares (TLS) approach.

2. Principal components, orthogonal projections, and regression

Let \(\boldsymbol{Y}\) be some random vector of length \(n \in \mathbb{N}\) and, in addition, let \(\mathbb{X}\) be some data matrix of type \(n \times p\), where, in general, \(p \in \mathbb{N}\) is considered to be less than \(n\).
A common task in regression is to use the information in \(\mathbb{X}\) to approximate the \(n\)-dimensional random vector \(\boldsymbol{Y}\) in a lower-dimensional space \(\mathcal{L}(\mathbb{X})\) – the \(p\)-dimensional space generated by the columns of the matrix \(\mathbb{X}\). If we denote the orthogonal projection of \(\boldsymbol{Y}\) onto \(\mathcal{L}(\mathbb{X})\) as \(P_{\mathbb{X}}(\boldsymbol{Y})\), then the linear regression model for regressing \(\boldsymbol{Y}\) on \(\mathbb{X}\), in the form \(P_{\mathbb{X}}(\boldsymbol{Y}) = a_1 \boldsymbol{X}_1 + \dots + a_p \boldsymbol{X}_p\), gives the minimum error \[ \| \boldsymbol{Y} - P_{\mathbb{X}}(\boldsymbol{Y}) \|_2^2. \] The corresponding projection matrix (which needs to be idempotent) is \[ \mathbb{H} = \mathbb{X}(\mathbb{X}^\top \mathbb{X})^{-1}\mathbb{X}^\top = \mathbb{Q}\mathbb{Q}^\top, \] where \(\mathbb{X}\) contains the basis vectors of \(\mathcal{L}(\mathbb{X})\) in its columns, or, respectively, the columns of \(\mathbb{Q}\) give an orthonormal basis of the same linear space. Obviously, it also holds that \[ P_{\mathbb{X}}(\boldsymbol{Y}) = \mathbb{H}\boldsymbol{Y} = \mathbb{X}(\mathbb{X}^\top \mathbb{X})^{-1}\mathbb{X}^\top \boldsymbol{Y}. \] This concept can be easily generalized to any orthogonal projection of \(\boldsymbol{Y}\) onto the linear span of the columns of some general \(n \times p\) matrix \(\mathbb{B}\), such that \(\mathbb{B}^\top\mathbb{B} = \mathbb{I}\). Let this projection be denoted as \(P_{\mathbb{B}}(\boldsymbol{Y})\). Then it holds that \(P_{\mathbb{B}}(\boldsymbol{Y}) = \mathbb{B}\mathbb{B}^\top\boldsymbol{Y}\) and, moreover, it can be shown that \[ \|\boldsymbol{Y} - P_{\mathbb{\Gamma}}(\boldsymbol{Y})\|_2^2 \leq \|\boldsymbol{Y} - P_\mathbb{B}(\boldsymbol{Y})\|_2^2, \] where \(\Gamma\) is the \(n\times p\) matrix with the first eigenvectors of the variance-covariance matrix \(\Sigma\) in its columns (see the Eckart and Young (1936) and Mirsky (1960) approximation results).
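A quick numerical check of these identities, with an arbitrary simulated design matrix (the data and dimensions below are illustrative choices, not taken from the original example):

```r
set.seed(1234)
n <- 20; p <- 3
X <- matrix(rnorm(n * p), ncol = p)   # a design matrix with full column rank
Y <- rnorm(n)

H <- X %*% solve(t(X) %*% X) %*% t(X)   # hat matrix from the explicit formula
Q <- qr.Q(qr(X))                        # orthonormal basis of the column space

max(abs(H - Q %*% t(Q)))                     # the two expressions coincide
max(abs(H %*% Y - fitted(lm(Y ~ X - 1))))    # H Y equals the least squares fit
```

Note that the identity \(\mathbb{H} = \mathbb{Q}\mathbb{Q}^\top\) holds for any orthonormal basis \(\mathbb{Q}\) of \(\mathcal{L}(\mathbb{X})\), here obtained via the QR decomposition.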
The following example shows three specific orthogonal projections of the random vector \(\boldsymbol{Y}\) into a linear subspace generated by the columns of some (well-defined) matrix of type \(n \times p\). The same data – the random vector \(\boldsymbol{Y}\) – are used for all projections, but the linear subspace that \(\boldsymbol{Y}\) is projected into differs in each case.
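The original code chunk is not reproduced here; a sketch of this kind of comparison (with simulated two-dimensional data and three different one-dimensional subspaces, the third being spanned by the leading eigenvector direction) could look as follows:

```r
set.seed(1)
n <- 100
Y <- matrix(rnorm(2 * n), ncol = 2) %*% matrix(c(2, 1, 1, 1), 2)  # simulated rows of R^2

## three unit vectors spanning three different one-dimensional subspaces
b1 <- c(1, 0)              # first coordinate axis
b2 <- c(1, 1) / sqrt(2)    # the diagonal
b3 <- svd(Y)$v[, 1]        # first right singular vector (= eigenvector of Y^T Y)

sqerr <- function(b) sum((Y - Y %*% b %*% t(b))^2)  # squared projection error
c(axis = sqerr(b1), diagonal = sqerr(b2), eigen = sqerr(b3))
## the eigenvector subspace attains the smallest error (Eckart-Young)
```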
Alternatively, instead of computing the spectral decomposition of the (theoretical/empirical) variance-covariance matrix \(\Sigma\), we can use the idea of standard linear regression and alternate between two different regression problems. For some data \(\boldsymbol{Y}_1, \dots, \boldsymbol{Y}_n \in \mathbb{R}^p\) we have \[
\sum_{i = 1}^n \|\boldsymbol{Y}_i - P_\mathbb{B}(\boldsymbol{Y}_i)\|_2^2 = \sum_{i = 1}^n \|\boldsymbol{Y}_i - \mathbb{B}\mathbb{B}^\top\boldsymbol{Y}_i\|_2^2 = \sum_{i = 1}^n \|\boldsymbol{Y}_i - \mathbb{B} \boldsymbol{v}_i\|_2^2 = \sum_{i = 1}^{n}\sum_{j = 1}^{p} (Y_{i j} - \boldsymbol{b}_j^\top\boldsymbol{v}_{i})^2,
\] where \(\boldsymbol{v}_i = \mathbb{B}^\top\boldsymbol{Y}_i\), for \(i = 1, \dots, n\), and \(\boldsymbol{b}_j\) is the corresponding row of \(\mathbb{B}\). However, neither \(\mathbb{B}\) nor the vectors \(\boldsymbol{v}_i\) are known. Therefore, we need to minimize the term on the right hand side with respect to some orthogonal matrix \(\mathbb{B}\), such that \(\mathbb{B}^\top\mathbb{B} = \mathbb{I}\) and some vectors \(\boldsymbol{v}_1, \dots, \boldsymbol{v}_n\). This can be easily done by an alternating regression approach. The expression \[
\sum_{i = 1}^{n} [Y_{i j} - \boldsymbol{b}_j^\top\boldsymbol{v}_i]^2
\] is minimized with respect to \(\mathbb{B}\) (in the first step), so that \(\mathbb{B}^\top\mathbb{B} = \mathbb{I}\), and it is also minimized with respect to \(\boldsymbol{v}_i\) (in the second step). Both steps are alternated until a sufficient precision (convergence) is reached. The main idea is to find the best orthogonal projection into the linear subspace generated by the columns of the orthogonal matrix \(\mathbb{B}\), such that the sum of squared errors is as small as possible. From linear regression theory it is known that, under the additional assumption of linearity, the best estimate is given within the linear regression framework. However, from a general point of view (without the linearity restriction) it is possible to obtain a better (orthogonal) projection. If the matrix \(\mathbb{B}\) contains the first \(p \in \mathbb{N}\) principal directions (in its columns), it defines the best orthogonal projection of the vector \(\boldsymbol{Y}\) into a \(p\)-dimensional linear subspace (with respect to the least squares criterion, of course). For a brief illustration see the next example (by Matías Salibián-Barrera).
And we can obtain the first principal component by adopting the alternating regression approach:
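The original code chunk is not preserved above; the following sketch implements the alternating regression scheme just described for a single direction \(\boldsymbol{b}\) (in which case the two least-squares steps reduce to the classical power iteration), with simulated data standing in for the actual example:

```r
set.seed(10)
n <- 200; p <- 5
## centred data with unequal column variances (so the first PC is well separated)
X <- scale(matrix(rnorm(n * p), ncol = p) %*% diag(sqrt(1:p)), scale = FALSE)

b <- rnorm(p); b <- b / sqrt(sum(b^2))    # random unit-norm starting direction
for (iter in 1:500) {
  v     <- X %*% b                        # step 1: scores given the direction b
  b.new <- crossprod(X, v) / sum(v^2)     # step 2: regress the data on the scores
  b.new <- b.new / sqrt(sum(b.new^2))     # renormalize to unit length
  if (sum(abs(b.new - b)) < 1e-10) break  # stop once the direction stabilizes
  b <- b.new
}
b   # estimate of the first principal direction (unique up to sign)
```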
which can be directly compared with the eigenvector itself (obtained by the singular value decomposition or, equivalently, by the eigenvalue decomposition):
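A direct comparison of the two decompositions (again with simulated data in place of the original example): the first right singular vector of the centred data matrix coincides, up to sign, with the first eigenvector of \(\mathbb{X}^\top\mathbb{X}\).

```r
set.seed(10)
X <- scale(matrix(rnorm(200 * 5), ncol = 5), scale = FALSE)  # centred data

v.svd   <- svd(X)$v[, 1]                                       # singular value decomposition
v.eigen <- eigen(crossprod(X), symmetric = TRUE)$vectors[, 1]  # eigenvalue decomposition

max(abs(abs(v.svd) - abs(v.eigen)))   # zero up to rounding (sign may differ)
```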
For some practical applications it can be useful to investigate the empirical performance of both approaches. This can be done effectively, for instance, by comparing the computational times needed by the two approaches (for this purpose we consider a higher-dimensional example).
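A sketch of such a timing comparison (the dimensions below are arbitrary illustrative choices):

```r
set.seed(99)
X <- matrix(rnorm(2000 * 200), ncol = 200)   # a higher-dimensional example

t.svd   <- system.time(s1 <- svd(X)$v[, 1])                                     # SVD of the data matrix
t.eigen <- system.time(s2 <- eigen(crossprod(X), symmetric = TRUE)$vectors[, 1]) # eigen decomposition of X^T X
rbind(svd = t.svd, eigen = t.eigen)[, "elapsed", drop = FALSE]

max(abs(abs(s1) - abs(s2)))   # the leading direction agrees up to sign
```

The eigen decomposition of the (smaller) \(p \times p\) matrix \(\mathbb{X}^\top\mathbb{X}\) is typically faster when \(p \ll n\), at the price of a slightly worse numerical conditioning.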
3. Principal component analysis in R

The principal component analysis is performed by the R commands princomp() or prcomp() (see below). Let the eigenvalue decomposition (respectively the singular value decomposition) be of the form \[ EIV(\mathcal{S}) = \Gamma \Lambda \Gamma^\top \quad \quad \textrm{respectively} \quad \quad SVD(\mathcal{X}) = UDV^\top. \] Then we easily obtain that both methods are equivalent, as we have \[
n \mathcal{S} = \mathcal{X}^\top\mathcal{X} = VDU^\top UD V^\top = V D^2 V^\top = \Gamma \widetilde{\Lambda} \Gamma^\top = EIV(n\mathcal{S}),
\] for \(\Gamma \equiv V\) and \(\widetilde{\Lambda} \equiv D^2\). Moreover, it holds that \[EIV(n\mathcal{S}) = \Gamma \widetilde{\Lambda} \Gamma^\top = \Gamma (n \Lambda) \Gamma^\top = n \Gamma \Lambda \Gamma^\top = n EIV(\mathcal{S}).\] Thus, if we multiply a square matrix by some constant, the eigenvectors remain the same and only the eigenvalues are multiplied correspondingly.

Individual tasks

The principal component analysis relies on the eigenvector decomposition of the sample variance-covariance matrix. Alternatively, one can use the singular value decomposition of the data matrix itself. However, in both situations it is crucial to understand the role of the eigenvectors and the eigenvalues. See the following examples to get some useful insight.
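For instance, a quick numerical check of the scaling property above (with an arbitrary simulated covariance matrix):

```r
set.seed(7)
S <- cov(matrix(rnorm(50 * 4), ncol = 4))   # a sample variance-covariance matrix

e1 <- eigen(S)
e2 <- eigen(5 * S)   # the same matrix multiplied by a constant

e2$values / e1$values                          # all ratios equal to the constant 5
max(abs(abs(e1$vectors) - abs(e2$vectors)))    # eigenvectors unchanged (up to sign)
```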
For illustration purposes we use a data sample which represents 65 river localities in the Czech Republic. For each locality there are 17 biological metrics recorded. Each biological metric describes the qualitative/quantitative status of some biological activity (for instance, phytobenthos, macrobenthos, fishes) within each locality, expressed in terms of some well-defined and internationally recognized metrics (indexes). The idea was to compare the quality of freshwater environments across all European Union member states. The data file can be obtained from the following address:
One can use a fancy way to visualize the correlation structure within the data (using the library ‘corrplot’, which needs to be installed first).
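The original installation and plotting commands are not preserved above; a minimal sketch (with the built-in mtcars data standing in for the river dataset, and with the plotting call guarded in case the library is not installed) could be:

```r
## install.packages("corrplot")   # run once, if the library is not yet available
R <- cor(mtcars)   # mtcars used here only as a stand-in for the river data
if (requireNamespace("corrplot", quietly = TRUE)) {
  corrplot::corrplot(R, method = "ellipse", order = "hclust")
}
```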
Another nice alternative can be obtained using the
princomp() vs. prcomp()

There are various libraries for PCA available in the R environment; the standard commands are princomp() and prcomp(). We can compare the results obtained by the two commands mentioned above:
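A minimal side-by-side sketch (the built-in mtcars data used as a stand-in): princomp() is based on the eigenvalue decomposition of the sample covariance matrix with divisor \(n\), while prcomp() uses the singular value decomposition of the centred data matrix with divisor \(n - 1\).

```r
pca1 <- princomp(mtcars)   # eigenvalue decomposition, divisor n
pca2 <- prcomp(mtcars)     # singular value decomposition, divisor n - 1

## the loadings agree up to sign; the standard deviations differ
## exactly by the factor sqrt((n - 1) / n)
n <- nrow(mtcars)
pca1$sdev / pca2$sdev      # a constant vector, equal to sqrt((n - 1) / n)
```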
The results are, however, not invariant with respect to transformations of the data (e.g., rescaling of the covariates).
Compare the standard deviation values with the square roots of the eigenvalues of the sample variance-covariance matrix.
Can you explain why the standard deviation values provided by the two procedures (princomp() and prcomp()) differ?

Dimension (information) reduction

A simple (visual) example of how the PC analysis actually works (and how it reduces dimensions/information) is given by the following illustration.
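The original illustration is not reproduced here; a sketch in the same spirit (mtcars as a stand-in dataset) reconstructs the centred data matrix from its first \(k\) principal components and shows how the approximation error decreases with \(k\):

```r
X  <- scale(as.matrix(mtcars), scale = FALSE)   # centred data (stand-in example)
pc <- prcomp(X)

## rank-k reconstruction from the first k principal components
reconstruct <- function(k)
  pc$x[, 1:k, drop = FALSE] %*% t(pc$rotation[, 1:k, drop = FALSE])

err <- sapply(1:ncol(X), function(k) sum((X - reconstruct(k))^2))
round(err, 2)   # monotonically decreasing, (numerically) zero at full rank
```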
5. Graphical representations of PCs

First of all, it is useful to know the amount of variability (the proportion of the total variability) which is explained by the first few principal components.
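For instance (the built-in USArrests data as a stand-in; the covariates are scaled since their units differ):

```r
pc <- prcomp(USArrests, scale. = TRUE)

summary(pc)$importance               # sdev, proportion, cumulative proportion
prop <- pc$sdev^2 / sum(pc$sdev^2)   # explained proportions computed by hand

screeplot(pc, type = "lines", main = "Scree plot")   # classical scree plot
```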
Individual tasks
There are also other options to visualize the results. A standard graphical tool is available through the command
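The command name is not preserved in the text above; the standard choice in base R is presumably biplot(), e.g.:

```r
## biplot of the first two principal components (USArrests as a stand-in)
pc <- prcomp(USArrests, scale. = TRUE)
biplot(pc, cex = 0.7)   # observations as points, variables as arrows
```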
A more fancy version can be obtained by installing the
Another type of plot can be obtained using the library ‘ggfortify’ (use
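A sketch of the ggfortify approach (guarded, in case the library is not installed; autoplot() is the relevant generic provided by the package):

```r
## install.packages("ggfortify")   # run once, if needed
if (requireNamespace("ggfortify", quietly = TRUE)) {
  library(ggfortify)
  autoplot(prcomp(USArrests, scale. = TRUE),
           loadings = TRUE, loadings.label = TRUE)   # scores plus loading arrows
}
```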
Last but not least, there is also a complex command
Homework Assignment (Deadline: 12.04.2022)

Use the command below to install the
Choose one dataset of your choice from the list of all available datasets in the package:
There are 21 different datasets and you can load each of them by typing and running