Sparse matrix multiplication complexity and CPU time
I am multiplying two sparse matrices $A$ and $A^T$ to form the product $A^T A$. From what I know, the complexity of this operation depends on nnz(A). Yet when I generate a random matrix with a fixed nnz(A) but an increasing number of rows and compute the product $A^T A$, I see that the CPU time it takes to perform this operation increases. Am I missing something?
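The question does not include the code used for the experiment, but a minimal sketch of the kind of test being described might look like the following, assuming the matrix is built with sparse() from random indices so that nnz(A) stays roughly fixed while the row count m grows; all sizes here are illustrative:

    n = 1000;                  % fixed number of columns
    k = 5e4;                   % target number of nonzeros, held fixed
    for m = [1e4 1e5 1e6 1e7]  % increasing number of rows
        % random m-by-n sparse matrix with about k nonzeros
        A = sparse(randi(m, k, 1), randi(n, k, 1), randn(k, 1), m, n);
        t = timeit(@() A'*A);  % note: the transpose is inside the timed expression
        fprintf('m = %8d, nnz(A) = %6d, time = %.4f s\n', m, nnz(A), t);
    end

Note that in a setup like this the transpose A' is evaluated inside the timed expression, which turns out to matter (see the accepted answer below).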
Accepted Answer
Dana on 9 Sep 2020
Assuming by A^T you mean the transpose of A, and assuming you already have A and A^T stored, then yes, the complexity of A^T*A should depend only on nnz(A) and on the number of rows A^T has (which is equal to the number of columns A has). So if you increase the number of rows m of A but keep the number of columns the same, computation time should eventually stop increasing with m. There are some important caveats here, though:
- This again assumes that you have already performed the transpose of A. If the transpose is included in your timed computation, then the computation is no longer independent of m, since the transpose operation itself has complexity that is linear in m.
- Complexity is a limiting characterization, i.e., it describes how the computational burden grows with m once m is already large. It need not hold for small m.
- On top of that, complexity describes a sort of "average" growth rate of computation time. Actual computation times are subject to randomness arising from several different sources, so we should not expect computation times to stay exactly the same as m increases, even when m is large to begin with.
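As a concrete illustration of these caveats (this code is not from the original answer; the sizes and variable names are illustrative), one might store the transpose once, time only the product, and rely on timeit's repeated measurements to smooth out timing noise:

    n = 1000;                  % fixed number of columns
    k = 5e4;                   % fixed number of nonzeros
    for m = [1e4 1e5 1e6 1e7]
        A  = sparse(randi(m, k, 1), randi(n, k, 1), randn(k, 1), m, n);
        At = A';                % transpose cost grows with m, so pay it outside the timer
        t  = timeit(@() At*A);  % timeit runs the expression several times for a stable estimate
        fprintf('m = %8d, time = %.4f s\n', m, t);
    end

For large enough m, the measured times should level off (up to noise), consistent with the cost depending on nnz(A) and the number of columns of A rather than on m.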
More Answers (0)