Hi. I am new to DNNs. I am using a deep neural network for binary classification, but it returns all zeros or all ones.
Tommy Bear
19 Feb 2020
Commented: Daniel Vieira on 26 Feb 2020
I've tried machine learning approaches (SVM, KNN, decision trees, ...) and the accuracy is good.
I am interested in transfer learning, so I want to build a deep learning model.
The attached picture shows my training data: columns 1 to 9 are the features, and the marked column (10) is the response, which will be converted into a categorical vector for training.
Here is my code for network training.
My training data is also attached ༼ •̀ ں •́ ༽ Thanks
layers = [
    sequenceInputLayer(9,"Name","sequence")       % 9 features per observation
    fullyConnectedLayer(12,"Name","fc_1")
    reluLayer("Name","relu_1")
    fullyConnectedLayer(96,"Name","fc_3")
    reluLayer("Name","relu_2")
    fullyConnectedLayer(48,"Name","fc_4")
    reluLayer("Name","relu_3")
    fullyConnectedLayer(2,"Name","fc_2")           % 2 classes for binary classification
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];
options = trainingOptions('adam', ...
    'InitialLearnRate',0.01, ...
    'LearnRateSchedule','piecewise', ...
    'MaxEpochs',30, ...
    'ValidationData',{xtest,ytest}, ...
    'ValidationFrequency',3, ...
    'MiniBatchSize',1024, ...
    'Verbose',1, ...
    'Plots','training-progress');
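The post doesn't show how the attached data is loaded, split, or converted before training, nor the trainNetwork call itself. A minimal sketch of the presumed preparation step, with assumed variable names (data, xtrain, ytrain, xtest, ytest) and an assumed 80/20 split:
% data: N-by-10 matrix, columns 1:9 = features, column 10 = 0/1 response (assumed layout).
n      = size(data,1);
idx    = randperm(n);                          % shuffle before splitting
nTrain = round(0.8*n);                         % assumed 80/20 train/validation split
xtrain = data(idx(1:nTrain),1:9);
ytrain = categorical(data(idx(1:nTrain),10));  % response as a categorical vector
xtest  = data(idx(nTrain+1:end),1:9);
ytest  = categorical(data(idx(nTrain+1:end),10));
How xtrain and xtest are then reshaped for the sequence input layer is not shown in the post.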
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/272512/image.png)
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/272513/image.png)
4 Comments
Accepted Answer
Daniel Vieira
21 Feb 2020
I recommend normalizing your predictors; they range from 10^-5 to 10^9, which is pretty insane. I'd rather work with the log10 of those values (ranging from -5 to 9).
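For instance (a sketch, assuming the predictors sit in N-by-9 matrices xtrain/xtest and are strictly positive):
% Shrink the predictor range from roughly 1e-5..1e9 down to roughly -5..9.
xtrainLog = log10(xtrain);
xtestLog  = log10(xtest);   % apply the same transform to validation/test data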
I would also change the sequence input layer to an imageInputLayer of size [1 9], and possibly adapt the way you feed in the input data. The sequence input layer is meant for LSTM models, which is not your case. The imageInputLayer, although meant for images, works just fine with vectors and matrices of any sort.
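A sketch of that change, reusing the layer names from the question (xtrainLog and ytrain are the assumed predictor matrix and categorical response from the snippets above):
% Replace only the input layer; the rest of the layer array stays the same.
layers = [
    imageInputLayer([1 9],"Name","input")      % each observation is a 1-by-9 "image"
    fullyConnectedLayer(12,"Name","fc_1")
    reluLayer("Name","relu_1")
    fullyConnectedLayer(96,"Name","fc_3")
    reluLayer("Name","relu_2")
    fullyConnectedLayer(48,"Name","fc_4")
    reluLayer("Name","relu_3")
    fullyConnectedLayer(2,"Name","fc_2")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];
% Predictors then go in as a 1-by-9-by-1-by-N numeric array
% (the validation data in trainingOptions needs the same reshape):
xtrain4d = reshape(xtrainLog', 1, 9, 1, []);
net = trainNetwork(xtrain4d, ytrain, layers, options);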
3 Comments
Daniel Vieira
26 Feb 2020
It may be because your data is unbalanced, like Srivardhan Gadila said below. When it's too unbalanced, the network will nearly always answer in favor of the most frequent label, effectively "capping" accuracy and misclassifying all the less frequent labels. You should select equal (or nearly equal) amounts of observations for each label. It should improve your accuracy.
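One simple way to do that is to undersample the majority class (a sketch, assuming the labelled data is an N-by-10 matrix named data with the 0/1 label in column 10):
% Keep the same number of observations from each class, chosen at random.
idx0 = find(data(:,10) == 0);
idx1 = find(data(:,10) == 1);
n    = min(numel(idx0), numel(idx1));            % size of the smaller class
keep = [idx0(randperm(numel(idx0), n)); ...
        idx1(randperm(numel(idx1), n))];
balanced = data(keep(randperm(numel(keep))), :); % shuffled, balanced training set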
More Answers (1)
Srivardhan Gadila
25 Feb 2020
It seems that your dataset is unbalanced: the count of sequences with label 0 is 59695 and with label 1 is 94226. This could bias the network's learning toward label 1. Please refer to Prepare and Preprocess Data & Deep Learning Tips and Tricks for more information.
For normalization of the data you can make use of the 'Normalization' & 'NormalizationDimension' Name-Value pair arguments of sequenceInputLayer or imageInputLayer.
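For example (a sketch; the 'zscore' option and 'NormalizationDimension' require R2019b or later):
% Let the input layer standardize the predictors itself.
% Sequence input: statistics are computed per channel, i.e. per feature.
seqIn = sequenceInputLayer(9,'Normalization','zscore','Name','sequence');
% Image input of size [1 9]: 'element' gives one mean/std per feature position.
imgIn = imageInputLayer([1 9],'Normalization','zscore', ...
    'NormalizationDimension','element','Name','input');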
0 Comments