Solving large linear systems of equations in parallel / distributed systems

20 views (in the last 30 days)
What would be the best approach to solving a large system of linear equations on a distributed system? The type of system that could be translated into a large dense square symmetric matrix of, say, 60,000 x 60,000 elements to start with.
  3 comments
Steven Lord
Steven Lord on 25 Jan 2023
If you're solving a linear system of equations, in addition to not inverting the matrix you might be able to get away with not creating the matrix at all!
Most, if not all, of the iterative solvers can accept either a coefficient matrix or a function handle that evaluates some type of matrix-vector product. If your matrix has some structure that you can exploit to compute those products without explicitly creating the matrix, that can save you memory. See for example the "Using Function Handle Instead of Numeric Matrix" example on the gmres documentation page.
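To make the matrix-free idea concrete, here is a small sketch in Python/SciPy (the toy operator is illustrative, not from this thread): SciPy's gmres accepts a LinearOperator built from a matvec function, in the same spirit as MATLAB's gmres accepting a function handle.

```python
# Matrix-free GMRES: solve A x = b without ever forming A.
# Illustrative operator: A = I + (1/n) * ones((n, n)), whose matvec
# is A @ x = x + mean-like term, computable in O(n) memory.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 2_000
b = np.ones(n)

def matvec(x):
    # A @ x with A = I + (1/n) * ones((n, n)), never materialized:
    # the rank-one part contributes the scalar sum(x)/n to every entry.
    return x + x.sum() / n

A = LinearOperator((n, n), matvec=matvec)
x, info = gmres(A, b)        # info == 0 means the solver converged

# For this operator and right-hand side the exact solution is 0.5 everywhere.
assert info == 0
assert np.allclose(x, 0.5)
```

Storing this A densely at n = 60,000 would take ~27 GB; the matvec above needs only two length-n vectors.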
Ben238
Ben238 on 27 Jan 2023
Thank you for your comments, and yes @John D'Errico, the very reason for my question is to ponder whether it is a good idea or not, although I admit my question was ill-formulated. And yes, it comes down to solving a linear system of equations.
QR decomposition, or rather SVD, would be the "traditional approaches" we are "familiar" with, but some basic searching on solving large linear systems of equations has turned up approaches like Stochastic Gradient Descent.
@Steven Lord thank you for pointing out the gmres documentation page, I will have a look.


Accepted Answer

Walter Roberson
Walter Roberson on 25 Jan 2023
Gb = 60000^2*8/1024^3
Gb = 26.8221
The matrix is close to 27 gigabytes. The inverse is the same size, so you will need at least 54 gigabytes to store the matrix and its inverse.
Suppose you have 60000 cores (actually possible! see https://www.pcgamer.com/this-computer-is-26-inches-tall-and-houses-a-400000-core-processor/ ) and you have each one compute the minors "leave one out" style. Does that reduce the task to 60000 simultaneous computations on a 60000 x 1 matrix? NO, it reduces the task to 60000 simultaneous computations on 59999 x 59999 matrices.
To within round-off error, if you divide the work over N cores, each core still needs to work with an array just fractionally smaller than the original 27 gigabytes. Even supposing you could arrange for all cores to use the same output array, and even supposing that each core did not need an intermediate array (probably not true), with 128 gigabytes of total RAM you would only be able to split the work across at most 3 cores -- 27 * 3 slightly-different inputs + 27 for the output. In reality, because intermediate arrays would be needed, you would probably not be able to use more than 2 cores in 128 gigabytes.
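The budget arithmetic above can be sketched as follows (in Python for illustration; the `fits` helper is hypothetical, just encoding the "N input copies plus one shared output" assumption from this answer):

```python
# Rough memory budget: each of N cores holds a copy of the ~27 GB input,
# plus one shared 27 GB output array.
bytes_per_elem = 8                       # double precision
n = 60_000
gb = n * n * bytes_per_elem / 1024**3    # ~26.82 GB per dense matrix

def fits(cores, ram_gb=128):
    # N input copies + 1 shared output must fit in total RAM
    return cores * gb + gb <= ram_gb

assert round(gb, 4) == 26.8221           # matches the figure quoted above
assert fits(3) and not fits(4)           # at most 3 cores in 128 GB
```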
How large is your computer system? How much RAM? How many cores?
  2 comments
Ben238
Ben238 on 27 Jan 2023
Hi Walter,
Thank you for your reply. From the size point of view, we were already well aware this was a challenge, not only in terms of RAM but also in terms of communication. Thankfully, these days 256 GB is not beyond reach.
Walter Roberson
Walter Roberson on 27 Jan 2023
I do not know whether using a distributed system could potentially be of use in such a situation; it probably could be in some cases, such as if you have a block-diagonal system.
I would point out, though, that the solvers currently invoked by \ call into high-performance libraries that are already multi-core. Some of the steps, such as matrix multiplication or dot products, can be broken into pieces and performed over multiple cores, and the solvers invoked by \ already know how to do that.


More Answers (0)
