The general idea of cleansing a correlation matrix via random matrix theory is to compare its eigenvalues to those of a random matrix in order to see which parts of it lie beyond normal randomness. These are then filtered out, and one is left with the non-random parts.
A one-dimensional analog would be to compare the return distribution of a stock to the normal distribution and filter out its normal component. One should be left with the non-random part of that stock's distribution.
I have two questions: is my reasoning, i.e. the intuition behind using RMT and my analogy, correct, and if so, is something like the one-dimensional case done in practice (or would it make sense to do)?
Answer
This is correct: "The general idea of cleansing a correlation matrix via random matrix theory is to compare its eigenvalues to that of a random one to see which parts of it are beyond normal randomness."
This is not correct: "These are then filtered out and one is left with the non-random parts."
The term "filtering", although used extensively in the literature, is misleading because the eigenvectors from the original correlation (or covariance matrix) remain part of the matrix and the sum of the eigenvalues does not change.
In all RMT filtering procedures, the matrix is decomposed and re-built via the eigenvector decomposition theorem:
$$ C = V \,\mathrm{diag}(\lambda_1, \dots, \lambda_N)\, V^{\top} $$ Notation: $C$ is the correlation matrix, $V$ is the matrix whose columns are its eigenvectors, $\lambda_1, \dots, \lambda_N$ are the eigenvalues, $\mathrm{diag}(\cdot)$ builds a diagonal matrix from the eigenvalues, and $V^{\top}$ is the transpose of $V$.
In RMT cleansing we only perform operations on the diagonal matrix of eigenvalues resulting from the decomposition; the eigenvectors are left untouched. Also, the sum of the eigenvalues is preserved pre- and post-cleansing. Since the sum of the eigenvalues also equals the trace of the covariance matrix, this ensures that the variance of the system is preserved.
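As a quick illustration of this identity and the trace-preservation point, here is a minimal NumPy sketch (the matrix values are made up):

```python
import numpy as np

# A small example correlation matrix (hypothetical values).
C = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])

# Eigendecomposition: C = V diag(lambda) V^T for a symmetric matrix.
eigenvalues, eigenvectors = np.linalg.eigh(C)

# Re-building the matrix from its eigenvectors and eigenvalues
# recovers the original exactly (up to floating-point error).
C_rebuilt = eigenvectors @ np.diag(eigenvalues) @ eigenvectors.T
assert np.allclose(C, C_rebuilt)

# The sum of the eigenvalues equals the trace of the matrix,
# i.e. the total variance of the system (N for a correlation matrix).
assert np.isclose(eigenvalues.sum(), np.trace(C))
```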
To recap, all RMT procedures follow this four-step process:
1. The first step is, as you point out, identifying the upper noise band of eigenvalues predicted by a de-meaned random matrix of the same sigma, using the Marchenko-Pastur law. For exponentially weighted matrices, a power-law bound is used instead.
2. The matrix is decomposed via the eigenvector decomposition theorem.
3. The diagonal matrix of eigenvalues is cleansed. For example, the method of Laloux et al. (1999, 2000) assigns to every "noisy" eigenvalue beneath the upper noise band the average of all such noisy eigenvalues. There are several variations in which new values you assign to the noisy eigenvalues (power-law method, Krzanowski, etc.). Regardless, in all RMT methods the sum of the eigenvalues is preserved post-cleansing, so you cannot simply set them to zero.
4. We re-build a "cleansed" or "filtered" covariance matrix from the same eigenvectors and the revised eigenvalues via the eigenvector decomposition theorem (applied in reverse this time, with the same eigenvectors). A sketch of the full procedure follows below.
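To make these steps concrete, here is a minimal NumPy sketch of the whole procedure. The function name, the use of q = N/T, and the unit-variance Marchenko-Pastur edge are my own illustrative choices; the eigenvalue re-assignment follows the Laloux-style averaging described in step 3, without the refinements (e.g. adjusting the noise variance for the market mode) used in the original papers.

```python
import numpy as np

def rmt_cleanse(returns):
    """Laloux-style cleansing sketch: replace eigenvalues below the
    Marchenko-Pastur upper edge with their average, keep the eigenvectors,
    and preserve the eigenvalue sum. `returns` is a T x N array of asset
    returns; the details here are illustrative, not a reference method."""
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)

    # Step 1: upper edge of the Marchenko-Pastur noise band for a random
    # matrix with unit variance and aspect ratio q = N / T.
    q = N / T
    lambda_max = (1.0 + np.sqrt(q)) ** 2

    # Step 2: decompose the correlation matrix.
    eigenvalues, eigenvectors = np.linalg.eigh(C)

    # Step 3: cleanse the eigenvalues -- assign every "noisy" eigenvalue
    # below lambda_max to the average of all such noisy eigenvalues, so
    # the total sum (the trace) is unchanged.
    noisy = eigenvalues < lambda_max
    cleansed = eigenvalues.copy()
    if noisy.any():
        cleansed[noisy] = eigenvalues[noisy].mean()

    # Step 4: re-build the cleansed matrix with the same eigenvectors.
    return eigenvectors @ np.diag(cleansed) @ eigenvectors.T
```

Passing a T x N array of returns to this sketch gives back a cleansed correlation matrix with the same trace as the raw one.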
Digression: You could come up with your own techniques so long as the constraints above are respected. For example, since we know that the principal/top eigenvectors have significant non-normal structure or skew, you could "filter" those eigenvalues that correspond to eigenvectors that exhibit normal structure. Also, for what it's worth, in my experience filtering methods in which the distance from one eigenvalue to the next increases successively perform better (for example, a recursive definition in which the 2nd smallest eigenvalue is twice the smallest, the 3rd smallest is twice the 2nd smallest, and so on up to the last noisy eigenvalue).
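For what it's worth, here is a hedged sketch of such a doubling scheme applied to the noisy eigenvalues; the final rescaling so that their sum (and hence the trace) is unchanged is my own addition to respect the constraint above, not part of any published method:

```python
import numpy as np

def doubling_cleanse(eigenvalues, lambda_max):
    """Illustrative 'successive doubling' assignment for noisy eigenvalues.

    `eigenvalues` is assumed to be in ascending order (as returned by
    np.linalg.eigh). The k-th smallest noisy eigenvalue is set
    proportional to 2**k, then the block is rescaled so the sum of the
    noisy eigenvalues, and hence the trace, is unchanged (this rescaling
    is an assumption; see the note above)."""
    noisy = eigenvalues < lambda_max
    cleansed = eigenvalues.copy()
    n_noisy = int(noisy.sum())
    if n_noisy > 0:
        pattern = 2.0 ** np.arange(n_noisy)             # 1, 2, 4, 8, ...
        pattern *= eigenvalues[noisy].sum() / pattern.sum()
        cleansed[noisy] = pattern
    return cleansed
```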
Also, one of the most accessible introductions to random matrix theory is the series of papers by V. Plerou et al.
The Capital Fund Management team (Bouchaud, Pafka, Potters, et al.) has published more of the frontier research on RMT.