
Improving Precision of Eigenvectors with Large Eigenvalues

98 views (last 30 days)
bil on 24 June 2024 at 3:13
Commented: Christine Tobler on 26 June 2024 at 6:25
Hi all,
This is a bit of a generic question, but I was hoping someone could provide some insight on how I could improve the precision of eigenvectors using "projection techniques", like in this post, for a matrix that has 1 or 2 very large eigenvalues while the remaining eigenvalues are much smaller (by several orders of magnitude). The code there is written in R, which I am not too familiar with, but is the general idea that I should subtract out the overlap of the eigenvectors for the smaller eigenvalues with those for the larger eigenvalues to improve their precision? By "precision" I mean the following: applying eig to a matrix M generates a set of eigenvectors, but when I check M*v - λ*v, the result deviates from 0, i.e. applying M to the "eigenvector" has produced a linear combination that includes other eigenvectors.
Any guidance is appreciated.
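For concreteness, a minimal MATLAB sketch of the residual check described above (the test matrix and variable names are just placeholders, not from the linked post):

M = diag([1e8, 1e7, 1e-3, 1e-4]) + 1e-2*ones(4);   % placeholder symmetric matrix with a large eigenvalue spread
M = (M + M')/2;                                    % enforce exact symmetry

[V, D] = eig(M);     % columns of V are the computed eigenvectors
R = M*V - V*D;       % column k holds M*v_k - lambda_k*v_k
vecnorm(R)           % size of the deviation from 0 for each eigenpair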

Accepted Answer

Christine Tobler on 24 June 2024 at 10:04
The linked post is about a symmetric matrix; is that also your case? If so (i.e., if issymmetric returns true for your matrix), the eigenvectors returned by EIG will be orthogonal up to numerical round-off (an exact 0 is not achievable in floating-point computation, practically speaking).
The proposed solution doesn't improve the eigenvectors; instead it applies a projection to the computed residual vectors, removing any components along the eigenvectors of the larger eigenvalues. Whether this is useful will depend on your application: do you need to do further computations with that residual?
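For illustration, a rough MATLAB sketch of that kind of projection, assuming a symmetric matrix; the test matrix, the choice of 2 dominant eigenvalues, and the variable names are placeholders rather than anything from the linked post:

n = 6;
Q = orth(randn(n));                            % random orthogonal basis
M = Q*diag([1e8 1e7 1e-1 1e-2 1e-3 1e-4])*Q';  % 2 dominant eigenvalues, the rest much smaller
M = (M + M')/2;                                % enforce exact symmetry

[V, D] = eig(M);
lambda = diag(D);
[~, idx] = maxk(abs(lambda), 2);               % indices of the 1-2 dominant eigenvalues
Vbig = V(:, idx);                              % their eigenvectors

R = M*V - V*D;                                 % residuals M*v_k - lambda_k*v_k
Rproj = R - Vbig*(Vbig'*R);                    % remove components along the dominant eigenvectors
disp([vecnorm(R); vecnorm(Rproj)])             % residual norms before/after the projection

Since the eigenvectors of a symmetric matrix are orthogonal, Vbig*(Vbig'*R) is the orthogonal projection of each residual column onto span(Vbig), so the projected residuals have no components along the dominant eigenvectors.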
  11 comments
Torsten on 26 June 2024 at 1:03
But I suspect this might not be the most accurate, and the most straightforward method is, as you said, to just take the reciprocals of the eigenvalues of M to get the eigenvalues of M^(-1), since the eigenvectors are the same for both matrices.
Yes, at least I cannot think of any advantage to working with the inverse in your case.
Christine Tobler on 26 June 2024 at 6:25
Closing the loop, I agree with Torsten that computing the eigenvalues and eigenvectors of the original matrix and then inverting the eigenvalues will be more accurate than calling eig on the inverse.
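For reference, a small sketch of the comparison discussed here (the test matrix is just a placeholder): compute eig of M once and take reciprocals of the eigenvalues, instead of forming inv(M) and calling eig on it.

M = gallery('lehmer', 5);                 % placeholder symmetric positive definite matrix
[V, D] = eig(M);
lambdaInv = 1 ./ diag(D);                 % eigenvalues of inv(M); the eigenvectors in V are unchanged

[~, Di] = eig(inv(M));                    % for comparison: eig applied to the explicit inverse
disp(sort(lambdaInv) - sort(diag(Di)))    % discrepancies introduced by forming inv(M)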
