How to avoid NaN in the mini-batch loss when training a convolutional neural network?

I'm working on training a convolutional neural network (following an example). I have 2000 images in each of 8 label categories and use 90% for training and 10% for testing. The images are in .jpg format and have a size of 512x512x1. The architecture of the CNN is currently as follows:
layers = [imageInputLayer([512 512 1])
options = trainingOptions('sgdm', 'MaxEpochs', 15, 'InitialLearnRate', 0.001, 'ExecutionEnvironment', 'parallel');
After the first training epoch, the mini-batch loss becomes NaN and the accuracy stays around chance level. The reason is probably that backpropagation generates NaN weights.
How can I avoid this problem? Thanks for the answers!
Greg Heath on 7 Sep 2017
Comment by Ashok kumar on 6 Jun 2017
What is the mini-batch loss shown in the table in the command window, and how is it calculated?


Accepted Answer

Javier Pinzón on 8 Sep 2017
I will provide the best comments as an answer, which may help solve this problem of NaN accuracy:
Hello everybody,
Because I have experienced some issues with PNG images, I highly recommend using JPG/JPEG format. Sometimes, due to the layers a PNG image can have, only the last layer is read and the whole image takes on that layer's color, i.e., the entire image becomes black or red. When you send such images to the network, it only sees a single-color image, nothing related to the rest of the dataset, and the network will not be able to learn the features. Also be careful with the size of your filters. Johannes's answer might also be a solution in some cases.
Be careful with the size of your input image. When it is really big, as happened with Alexander, the network will find it very difficult to learn with only one convolution layer, because a single set of weights would have to capture a very large number of features. I would recommend at least 2 or 3 convolution layers for that size, even for 128x128 images, and pooling layers to reduce the size of what enters the fully-connected layer, because that helps it classify the extracted features.
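To illustrate this advice, here is a minimal sketch of a deeper stack with pooling for the 512x512x1 input from the question. The filter counts and sizes are assumptions for illustration, not the original poster's network:

```matlab
% Hypothetical deeper architecture for 512x512x1 images: several
% convolution + ReLU stages, each followed by max pooling to shrink
% the feature maps before the fully-connected layer.
layers = [
    imageInputLayer([512 512 1])
    convolution2dLayer(5, 16, 'Padding', 2)   % 16 filters of size 5x5
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)         % 512 -> 256
    convolution2dLayer(5, 32, 'Padding', 2)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)         % 256 -> 128
    convolution2dLayer(3, 64, 'Padding', 1)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)         % 128 -> 64
    fullyConnectedLayer(8)                    % 8 label categories
    softmaxLayer
    classificationLayer];
```

Each pooling stage halves the spatial size, so far fewer activations reach the fully-connected layer.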
To initialize the weights, define the convolution layer before building the layers array:
conv1 = convolution2dLayer(F, D, 'Padding', 0);
conv1.Weights = gpuArray(single(randn([F F 3 D])*0.0001));
conv1.Bias = gpuArray(single(randn([1 1 D])*0.00001 + 1));
You can initialize both the weights and the bias if needed. Remember, D is the number of filters and F is the filter size. Then reference your variable in the layers array:
layers = [ ...
    imageInputLayer([128 128 3]);
    conv1];
and that is all.
Hope it helps,

More Answers (3)

Khalid Babutain on 18 Oct 2019
I came across this issue because I had it myself, and I was able to solve it simply by lowering the initial learning rate from ('InitialLearnRate',1e-3) to ('InitialLearnRate',1e-5).
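Applied to the options from the original question, that change is a single name-value pair in trainingOptions (a sketch; the other options are kept as the question posted them):

```matlab
% Same options as in the question, but with a much smaller initial
% learning rate, which often keeps the loss from diverging to NaN.
options = trainingOptions('sgdm', ...
    'MaxEpochs', 15, ...
    'InitialLearnRate', 1e-5, ...   % was 1e-3
    'ExecutionEnvironment', 'parallel');
```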

Salma Hassan on 20 Dec 2017
I have this line in my code: [trainedNet,traininfo] = trainNetwork(trainingimages,Layers,opts); When I opened the structure traininfo, I got the values of training accuracy and training loss, but for the validation accuracy and loss I got only the first value and the rest are NaN. What is the problem in this case?
Salma Hassan on 31 Dec 2017
Mr Javier Pinzón, I posted a separate question titled "what is causes NaN values in the validation accuracy and loss from traning convolutional neural network and how to avoid it?" at this link


Poorya Khanali on 10 Feb 2021
I have a ResNet; when the image size is 35x60, everything works fine (no NaN during training), but when I change the image size to 59x60 (for different data), the network seems to work at the beginning, and then after some epochs the NaNs start to appear. Could you please help me out!
