Neural network training with single GPU acceleration problem

3 views (in the last 30 days)
Botond Szilagyi on 3 Jul 2017
Hi all, I am trying to train a neural network on a single GPU, since the pure CPU implementation is slow even with multicore calculations, and I expect a significant speedup on the GPU. Training the original network works well, but I get an error message when I switch to the GPU. I stepped back and tried the example problem for GPU-accelerated ANN training from the MATLAB help, which worked fine. I also tried to reproduce the problem on different machines, without success in resolving it.
About the problem: the input data set has size 2x1000000, while the output has size 91x1000000. Regardless of the network structure and the training/performance functions, I always get the same error when 'useGPU' is set to 'yes':
Reference to non-existent field 'xoffset'.
Error in nnGPUOp.formatNet (line 32)
  net.outputs{i}.processSettings{j}.onGPU = { ...
Error in nncalc.setup2 (line 10)
  calcNet = calcMode.formatNet(calcNet,calcHints);
Error in nncalc.setup (line 17)
  [calcLib,calcNet] = nncalc.setup2(calcMode,calcNet,calcData,calcHints);
Error in network/train (line 357)
  [calcLib,calcNet,net,resourceText] = nncalc.setup(calcMode,net,data);
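For reference, a minimal sketch of the kind of call that triggers the error. The variable names and network choice are illustrative only (random data with the same row dimensions as in the question, a small fitnet), not the actual code:

```matlab
% Illustrative repro sketch -- sizes match the question, data is random
x = rand(2, 1000);          % inputs:  2 x N
t = rand(91, 1000);         % targets: 91 x N

net = fitnet(10);           % any small feedforward network
net.trainFcn = 'trainscg';  % GPU training supports trainscg

% This is the call that fails with "Reference to non-existent field 'xoffset'":
net = train(net, x, t, 'useGPU', 'yes');
```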
I also observed that if I set the network up inversely (using the 91x1000000 array as input and the 2x1000000 array as target), training works, and is considerably faster than the pure CPU version. I therefore assume the problem is related to the sizes of the inputs and targets. It is still not clear to me why it appears only with the GPU-accelerated solver, or how I could eliminate it.
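The missing 'xoffset' field hints that the output-processing settings were never initialized before nnGPUOp.formatNet tried to copy them to the GPU (mapminmax, the default processing function, stores an xoffset field in its settings). A possible workaround, offered as an assumption rather than a confirmed fix, is to configure the network on the data before training so those settings exist, or to disable output processing and pre-scale the targets yourself:

```matlab
% Workaround sketch (assumption, not a verified fix): initialize the
% processing settings on the actual data before calling train.
net = configure(net, x, t);               % fills in mapminmax settings (incl. xoffset)
net = train(net, x, t, 'useGPU', 'yes');

% Alternative: strip output processing and normalize targets manually
% net.outputs{end}.processFcns = {};
% [tn, ts] = mapminmax(t);                % keep ts to reverse the mapping later
```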
Thank you, Botond Szilagyi

Answers (0)

