Support Vector Machine parameters in MATLAB
I am working on my artificial intelligence problem and I am following the instructions from this example:
There, they use a support vector machine to classify:
classifier = fitcecoc(trainingFeatures, trainingLabels, ...
'Learners', 'Linear', 'Coding', 'onevsall', 'ObservationsIn', 'columns');
I tried this example with my own data set and it achieves an accuracy of 89.5%, so it works pretty well. But now I would like to try my own SVM with my own settings instead of the defaults.
I read in the documentation that fitcecoc uses an SVM with a linear kernel by default; now I would like to try different kernels, for instance Gaussian and polynomial.
I know from the Machine Learning course on Coursera that SVMs have a parameter (Andrew Ng refers to it as C), and that each kernel has its own parameter. I also found info about the kernel parameters in this MathWorks URL:
According to that link:
- The Gaussian kernel has a parameter SIGMA, and the polynomial kernel has a parameter P, which is the order of the polynomial function.
So I wrote this code:
Oursvm = templateSVM('KernelFunction','polynomial');
classifier = fitcecoc(trainingFeatures, trainingLabels,'Learners',...
Oursvm,'Coding', 'onevsall', 'ObservationsIn', 'columns');
Now I would like to change the P parameter. In the templateSVM documentation I found that I can set it like this:
Oursvm = templateSVM('KernelFunction','polynomial','PolynomialOrder',9);
The default value is 3, but no matter which number I use for PolynomialOrder, the accuracy is always the same: 3.2258 for p = 1, p = 2, or even p = 9.
Isn't it weird?
- What am I missing?
- Also, how can I set the SIGMA parameter for the Gaussian kernel? Training with the default configuration, the accuracy is very low, and the templateSVM documentation does not specify clearly how to set this parameter.
- How can I set the C parameter of my SVM?
- Finally, I have read that you need at least 10 times as many training samples as dimensions of the input data. How is it possible that the deep learning example uses only 201 samples (67 for each class, three classes total) when the input data has 4096 dimensions?
1 comment
David Resendes
6 May 2017
Hi, Manuel.
I address your questions below. First, though, I'm noticing that you are adjusting hyperparameters, but I'm not certain how you're checking the accuracy of the fitted models. Are you using a test set or the training data?
RE "What am I missing": Suppose that the decision boundary is quadratic. Would you expect a polynomial kernel of degree 9 to do any better than a polynomial kernel of degree 2? I suspect that you're using the training data to check the models' accuracy. Try using a test set or implement cross-validation. In general, a plot of the generalization error with respect to model complexity is bowl shaped, that is, a model with a degree 9 kernel might perform just as well as a model with a degree 2 kernel on the training data, but it might not generalize as well.
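As a sketch of that suggestion (assuming the trainingFeatures and trainingLabels variables from the question, and an arbitrary set of orders to compare), cross-validated loss can be computed directly through fitcecoc's 'KFold' option and compared across polynomial orders instead of training accuracy:

```matlab
% Compare polynomial orders by 5-fold cross-validated loss
% rather than by accuracy on the training data.
for p = [1 2 3 9]
    t = templateSVM('KernelFunction', 'polynomial', 'PolynomialOrder', p);
    cvModel = fitcecoc(trainingFeatures, trainingLabels, ...
        'Learners', t, 'Coding', 'onevsall', ...
        'ObservationsIn', 'columns', 'KFold', 5);
    fprintf('p = %d: cross-validated loss = %.4f\n', p, kfoldLoss(cvModel));
end
```

The order with the lowest cross-validated loss is the one most likely to generalize, even if a higher-order kernel fits the training data equally well.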
RE "setting SIGMA": Use the 'KernelScale' name-value pair. I will clarify this in the documentation.
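For example (a sketch reusing the variables from the question; the KernelScale value of 2 is purely illustrative, and 'auto' lets the software pick a scale heuristically):

```matlab
% Gaussian (RBF) kernel; 'KernelScale' plays the role of sigma.
t = templateSVM('KernelFunction', 'gaussian', 'KernelScale', 2);
% Alternatively: templateSVM('KernelFunction','gaussian','KernelScale','auto')
classifier = fitcecoc(trainingFeatures, trainingLabels, ...
    'Learners', t, 'Coding', 'onevsall', 'ObservationsIn', 'columns');
```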
RE "setting C": Use the 'BoxConstraint' name-value pair.
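A sketch, again with the question's variables (the value 10 is an arbitrary example; the default box constraint is 1):

```matlab
% 'BoxConstraint' is the C parameter from Andrew Ng's course:
% larger C penalizes margin violations more heavily.
t = templateSVM('KernelFunction', 'polynomial', ...
    'PolynomialOrder', 3, 'BoxConstraint', 10);
classifier = fitcecoc(trainingFeatures, trainingLabels, ...
    'Learners', t, 'Coding', 'onevsall', 'ObservationsIn', 'columns');
```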
RE "10 times obs. than predictors": It's very likely that someone else in the community can speak to this better than I can. However, I can say that training an algorithm takes time. It's likely that a simpler data set was used in the example so that you can run the example and get the expected results, and interact with the code, in a reasonable amount of time.
Answers (0)