A range of successful techniques in computer vision, such as Eigenfaces and Fisherfaces, is based on the spectral decomposition of empirical covariance matrices constructed from the given data. These matrices are typically constructed in a setting where the dimension of the data (the number of pixels) exceeds the number of available samples, sometimes by a large margin.
However, it has been established in the statistics literature that, under these conditions and some fairly general modeling assumptions, the eigenvectors and eigenvalues of covariance matrices cannot be estimated reliably.
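This failure mode can be illustrated numerically. The sketch below is not the thesis's own model; it uses a standard "spiked" covariance (identity plus one strong direction), which is an assumption made here purely for illustration. When the number of samples is far below the dimension, the leading eigenvector of the empirical covariance is nearly uncorrelated with the true one; with ample samples it is recovered well.

```python
import numpy as np

def leading_eigvec_alignment(n, p, spike=1.0, seed=0):
    """Alignment |<v_hat, v>| between the true and the estimated leading
    eigenvector of a spiked covariance Sigma = I + spike * v v^T,
    estimated from n samples in p dimensions."""
    rng = np.random.default_rng(seed)
    v = np.zeros(p)
    v[0] = 1.0  # true leading eigenvector
    # Draw rows with covariance I + spike * v v^T:
    # x_i = g_i + sqrt(spike) * z_i * v, with g_i ~ N(0, I), z_i ~ N(0, 1)
    G = rng.standard_normal((n, p))
    z = rng.standard_normal(n)
    X = G + np.sqrt(spike) * np.outer(z, v)
    S = X.T @ X / n  # empirical covariance (the mean is zero by design)
    _, vecs = np.linalg.eigh(S)
    v_hat = vecs[:, -1]  # eigenvector of the largest eigenvalue
    return abs(v @ v_hat)

# Far fewer samples than dimensions vs. ample samples:
few = leading_eigvec_alignment(n=50, p=500)
many = leading_eigvec_alignment(n=20000, p=500)
print(f"n=50:    alignment = {few:.2f}")
print(f"n=20000: alignment = {many:.2f}")
```

In the data-starved regime the alignment is close to zero, i.e. the estimated "eigenface" direction is essentially noise, which is exactly the difficulty the abstract refers to.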
Several techniques to remedy this problem have been proposed. They typically make specific assumptions about the structure of the covariance matrix and require that this structure be known in advance.
In this thesis, we propose a new method for automatically learning non-local structure in the covariance matrix in a data-dependent way. This learned structure is then used to improve inference for methods such as Eigenfaces. Unlike most existing methods in computer vision and statistics, we make no assumptions about the spatial (pixel) proximity structure of the data.
We provide theoretical results indicating that our methods may overcome the problem of insufficient data. We evaluate our algorithms empirically and demonstrate significant and consistent improvements over traditional Eigenfaces as well as over more recent techniques, such as 2D PCA, Euclidean Banding, and thresholding, across a wide range of parameter settings.
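For concreteness, the thresholding baseline mentioned above can be sketched as follows. This is a minimal, illustrative version of hard thresholding of the empirical covariance (in the spirit of the thresholding estimators it is compared against), not the thesis's own method; the toy dimensions, threshold value, and sparse true covariance are assumptions chosen only for the example.

```python
import numpy as np

def threshold_covariance(S, t):
    """Hard-threshold an empirical covariance: zero every entry whose
    magnitude is below t, but keep the diagonal intact."""
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

# Toy example: a sparse true covariance (identity plus one strong
# off-diagonal pair) estimated from a limited number of samples.
rng = np.random.default_rng(1)
p, n = 40, 80
Sigma = np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = 0.6
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)   # dense: every entry is nonzero noise
T = threshold_covariance(S, t=0.3)

# Thresholding suppresses the many small noise entries while strong,
# genuinely correlated entries tend to survive.
print("nonzero off-diagonals before:", np.count_nonzero(S) - p)
print("nonzero off-diagonals after: ", np.count_nonzero(T) - p)
```

Such estimators regularize the covariance by assuming sparsity; the thesis's contribution, by contrast, is to learn the relevant (possibly non-local) structure from the data rather than fixing it in advance.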