How to speed up code using the GPU?

khan on 10 Apr 2015
Commented: Greg Heath on 20 Apr 2015
Hi all, I have a general question. I have a neural network whose input is 80x60x13x2000.
In the current setup I take one sample (80x60x13) at a time and process it through to the final output. In the first hidden layer it becomes 76x56x11x3, in the second 38x28x9x3, and in the third 34x24x7x3.
Can anybody tell me how to use the GPU at the first and third layers so that processing becomes faster? I previously converted all the data to gpuArray, but performance got worse.
Can anybody guide me on how to use it more effectively?
With best regards,
khan
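For illustration, here is a minimal sketch of moving the whole data set to the GPU once and processing it there, rather than transferring one sample at a time; repeated per-sample host-to-device transfers are the usual reason gpuArray code ends up slower. The 5x5x3 kernel, single output map, and layer shape below are assumptions, not the actual network.

% Minimal sketch -- kernel size and single filter are assumptions.
X  = rand(80, 60, 13, 2000, 'single');        % stand-in for the real input
Xg = gpuArray(X);                             % one host-to-device transfer
k  = rand(5, 5, 3, 'single', 'gpuArray');     % hypothetical 5x5x3 kernel
Y  = zeros(76, 56, 11, 2000, 'single', 'gpuArray');
for n = 1:size(Xg, 4)
    Y(:,:,:,n) = convn(Xg(:,:,:,n), k, 'valid');  % runs on the GPU
end
result = gather(Y);                           % one device-to-host transfer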
1 comment
Greg Heath on 20 Apr 2015
Inputs, targets and outputs are 2-dimensional matrices. I have no idea how your description relates to 2-D matrix signals and a hidden-layer net topology.
Typically,
[ I N ] = size(input)
[ O N ] = size(target)
[ O N ] = size(output)
The corresponding node topology is
I-H-O for a single hidden layer
I-H1-H2-O for a double hidden layer
Please try to explain your problem in these terms.
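For example, a minimal sketch of putting this data into that 2-D layout and defining a double-hidden-layer net; the target dimension O = 10 and the hidden sizes [20 10] are placeholders, not taken from the question.

% Minimal sketch -- O and hidden layer sizes are placeholders.
X4     = rand(80, 60, 13, 2000, 'single');   % stand-in for the 4-D data
input  = reshape(X4, [], 2000);              % [ I N ] with I = 80*60*13, N = 2000
target = rand(10, 2000);                     % [ O N ] with O = 10

[I, N] = size(input)
[O, N] = size(target)

net = feedforwardnet([20 10]);               % I-H1-H2-O (double hidden layer)
% net = train(net, input, target, 'useGPU', 'yes');   % optional GPU training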


Answers (0)
