Why does fminunc not find the true global minimum?

MRC
MRC on 3 Feb 2014
Commented: MRC on 3 Feb 2014
Hi all, I need to solve this unconstrained optimization problem (attached). I know that the function has its global minimum at [1 2 2 3]. However, if I set the starting value to [1 2 2 3], the algorithm ends up at [1.1667 2.4221 2.2561 3]. I have some doubts to clarify (I'm not familiar with this topic, sorry for the trivial questions):
1) The iterative display shows a function value of 5.47709e-06 at iteration 0 and 1.41453e-06 at iteration 10. But if I compute the function value at [1 2 2 3] I get 1.4140e-06, and at [1.1667 2.4221 2.2561 3] I get 1.5635e-06. Why do these values differ from the starting and final function values reported in the algorithm output?
2) How can I force the algorithm to keep searching until it arrives at [1 2 2 3]?
Thanks!

Accepted Answer

Matt J
Matt J on 3 Feb 2014
When you call fminunc with all six outputs,
[x,fval,exitflag,output,grad,hessian]= fminunc(...)
what are the values of these outputs?
In particular, if the true Hessian is singular at the global min [1 2 2 3], I can imagine its finite difference estimate, as computed by fminunc, could be numerically problematic, e.g., not positive definite.
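For instance, something along these lines (a minimal sketch; fun, theta0, and options are assumed to be the objective handle, start point, and options from your own script) would show whether the gradient is actually near zero and whether the Hessian estimate is close to singular:
% Hypothetical diagnostic sketch -- fun, theta0 and options are assumed to be
% the objective handle, starting point and options from the original script.
[x,fval,exitflag,output,grad,hessian] = fminunc(fun, theta0, options);
exitflag            % why fminunc stopped
output.message      % text description of the stopping condition
norm(grad)          % should be close to zero at a genuine minimizer
eig(hessian)        % near-zero or negative eigenvalues flag a singular or indefinite Hessian estimate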
  7 comments
Matt J
Matt J on 3 Feb 2014
Edited: Matt J on 3 Feb 2014
"my problem cannot be reparameterized in the way you suggest."
Forget it. Nevertheless, your code can be cleaner and more efficient. Below is my idea of what it should look like. Notice that there is a lot you can pre-compute in the interest of speed. Notice also the more modern way of passing fixed data and parameters to functions.
thetatrue = [1 2 2 3];
mu = [0 0];
sd = [1 0.3; 0.3 1];                 % covariance matrix for mvncdf

% ix is the data matrix from your attached file (one column per regressor)
A1 = [-ones(size(ix,1),1), -ix(:,1)];
A2 = [-ones(size(ix,1),1), -ix(:,2)];

% Everything that does not depend on theta is computed once, up front
cdfun = @(x) mvncdf([A1*[x(1);x(3)], A2*[x(2);x(4)]], mu, sd);
W1 = cdfun(thetatrue);
W2 = 1 - W1;

options = optimset('Display','iter','MaxIter',10000,'TolX',1e-30,'TolFun',1e-30);

theta0 = [1 2 2 3];                  % starting values
[theta,fval,exitflag,output] = ...
    fminunc(@(x) log_lik(x,cdfun,W1,W2), theta0, options);

function val = log_lik(theta,cdfun,W1,W2)
% Negative expected log-likelihood; the fixed data are passed in as arguments
z = cdfun(theta);
val = -sum(W1.*log(z) + W2.*log(1-z));
end
MRC
MRC on 3 Feb 2014
Thank you!


More Answers (1)

Alan Weiss
Alan Weiss on 3 Feb 2014
I did not look at your data. But I doubt that the true global minimum is at [1 2 2 3] if you are really fitting to data. I would bet that you generated data from a known distribution and then fit the model to that data. You will never get a perfect match to the true distribution, because the data you used are not perfectly distributed according to that theoretical distribution.
For instance, this toolbox example shows theoretical parameters of [1 3 2], and yet the fitted model has parameters [1.0169 3.1444 2.1596], and the fitted model is at a global minimum for that data set.
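As an illustration of this point (a minimal sketch with hypothetical numbers, not from your problem): draw data from a normal distribution with known parameters and fit it, and the estimates come out close to, but not exactly equal to, the true values.
% Hypothetical illustration: fit data generated from a known distribution
rng(0);                                    % for reproducibility
x = normrnd(1, 3, [1000 1]);               % true parameters: mu = 1, sigma = 3
phat = mle(x, 'distribution', 'normal')    % estimates near, but not exactly, [1 3]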
Alan Weiss
MATLAB mathematical toolbox documentation
  1 comment
MRC
MRC on 3 Feb 2014
If the function I maximized were the sample log-likelihood for Y|X~f(theta), then you would be right; but the function I maximize is the expected log-likelihood, and the data are just the X I'm conditioning on. I'm sure that the expected log-likelihood is maximized at [1 2 2 3] (I have theoretical results which show this). Can anyone help me answer questions 1 and 2?

