Deep Learning - Distributed GPU Memory
Hello,
I have many very large input matrices (detector values) connected by a fully connected layer, and the output is a regression layer that reconstructs an image from them (only one image is used at a time!). Because the data lack local correlation, a fully connected layer is necessary and a CNN cannot be used. However, the weights of this layer exceed the available VRAM.
1. Can MATLAB distribute the weights of a fully connected layer across multiple GPUs?
2. Does NVLink pool GPU memory so it appears as one larger address space?
I have the choice of buying 2x RTX 8000 (2x 48 GB) or 4x Titan RTX (4x 24 GB). An RTX 8000 costs 2.5x as much as a Titan RTX and offers the same performance, but with twice the memory.
Thanks
Answers (1)
Joss Knight
on 28 Mar 2020
No, there is no built-in support for what you are after, i.e. distributing the weights of a fully connected layer across multiple GPUs. You could implement it yourself using parallel language constructs, but I assume that is not what you're looking for.
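For what a manual implementation with parallel language constructs could look like, here is a minimal sketch of a model-parallel forward pass: the weight matrix is split row-wise so each GPU holds and multiplies only its own slice, and the partial outputs are concatenated on the client. This uses `spmd`, `gpuArray`, and `parpool` from Parallel Computing Toolbox; the layer sizes are made-up example values, and backpropagation, which would need the same partitioning plus gradient exchange, is not shown.

```
% Sketch: model-parallel fully connected layer (forward pass only).
% Requires Parallel Computing Toolbox; assumes one GPU per worker.
pool = parpool('local', gpuDeviceCount);

nIn  = 1e5;                    % flattened detector values (example size)
nOut = 4e4;                    % output pixels (example size)
x    = rand(nIn, 1, 'single'); % one input sample

spmd
    gpuDevice(labindex);                  % bind each worker to its own GPU
    rowsPerGPU = ceil(nOut / numlabs);
    r0 = (labindex-1)*rowsPerGPU + 1;     % first output row on this GPU
    r1 = min(labindex*rowsPerGPU, nOut);  % last output row on this GPU
    % Each worker holds only its slice of the weights and bias.
    Wpart = gpuArray.rand(r1-r0+1, nIn, 'single');
    bpart = gpuArray.zeros(r1-r0+1, 1, 'single');
    ypart = gather(Wpart * gpuArray(x) + bpart);  % partial output rows
end

y = vertcat(ypart{:});  % assemble the full output vector on the client
```

With this row-wise split, no GPU ever needs the full weight matrix, so a layer too large for one card's VRAM can fit across four Titan RTX cards just as well as across two RTX 8000s; the trade-off is the communication cost of broadcasting the input and gathering the partial outputs.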