# Finding the maximum of a function along with the Hessian matrix

5 views (last 30 days)
ektor on 24 Dec 2017
Edited: Matt J on 24 Dec 2017
Dear All,
I have the following function
f = -0.5*log(2*pi) - 0.5*(x + a*b) - 0.5*y./exp(x + a*b) - 0.5*log(2*pi*c) - 0.5*(x - d*(1-a))^2/c;
where a, b, c, d, and y are known values.
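For concreteness, f can be written as an anonymous function in MATLAB (the constant values below are placeholders for illustration, not from the question):

```matlab
% Hypothetical placeholder values for the known constants
a = 0.9; b = 0.1; c = 2.0; d = 0.5; y = 1.3;

% f as an anonymous function of x
f = @(x) -0.5*log(2*pi) - 0.5*(x + a*b) - 0.5*y./exp(x + a*b) ...
         - 0.5*log(2*pi*c) - 0.5*(x - d*(1-a)).^2/c;

f(0)   % evaluate f at a point
```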
I want to find the maximum of this function and evaluate its Hessian at the maximum point, so I use the Newton-Raphson method:
err = inf;
xxx = x0;   % x0 is the initial point
while norm(err) > 1e-6
    g   = 0.5*(y/exp(xxx + a*b) - 1) - (xxx - d*(1-a))/c;  % gradient
    H1  = -0.5*y/exp(xxx + a*b) - 1/c;                     % Hessian
    err = H1\g;                                            % Newton step
    xxx = xxx - err;
end
Then I obtain the resulting xxx, say xxx*, and evaluate H1 at this xxx*.
The above algorithm is part of a bigger code. However, when I use fminunc to find the maximum (by minimizing -f) instead of the Newton-Raphson method:
options = optimset('LargeScale','off','Display','off','TolFun',0.0001,'TolX',0.0001);
[x,fval,exitflag,output,G_sum,H] = fminunc('function',x0,options, y, a, d, c, b);
where 'function' is the minus target function (-f), I observe that my code works a lot better, albeit much slower.
Any ideas why there is such a difference, and is there a way to use a more efficient Newton-Raphson method which is much faster?
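One common way to make a plain Newton-Raphson loop more robust is to cap the number of iterations and shrink any step that fails to increase f. This is only a sketch under that assumption (a, b, c, d, y, and x0 are taken as given; the cap and tolerances are placeholder choices):

```matlab
% Safeguarded Newton-Raphson for maximizing f (scalar x assumed)
f = @(x) -0.5*log(2*pi) - 0.5*(x + a*b) - 0.5*y/exp(x + a*b) ...
         - 0.5*log(2*pi*c) - 0.5*(x - d*(1-a))^2/c;
xxx = x0;
for it = 1:100                                             % iteration cap
    g  = 0.5*(y/exp(xxx + a*b) - 1) - (xxx - d*(1-a))/c;   % gradient
    H1 = -0.5*y/exp(xxx + a*b) - 1/c;                      % Hessian (< 0)
    step = H1\g;
    if abs(step) < 1e-6, break; end
    t = 1;                          % halve the step until f does not decrease
    while f(xxx - t*step) < f(xxx) && t > 1e-8
        t = t/2;
    end
    xxx = xxx - t*step;
end
```

Since H1 is negative here, -H1\g is an ascent direction for f, so a small enough step always improves f; the halving loop simply finds such a step instead of trusting the full Newton step.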

Matt J on 24 Dec 2017
Edited: Matt J on 24 Dec 2017
Well, for one thing, fminunc's algorithms are not the same as what you've implemented. They have elements in common with Newton-Raphson, but they are not pure Newton-Raphson. Pure Newton-Raphson is not generally that reliable.
Another reason is that you have left 'GradObj' and 'Hessian' off, so fminunc has to do extra work to approximate the derivatives numerically.
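A sketch of that suggestion: supply the analytic gradient and Hessian of -f so fminunc does not approximate them. This uses the legacy optimset option names matching the syntax in the question; `negf` is a hypothetical function name, and y, a, d, c, b, and x0 are assumed to be defined:

```matlab
% Turn on the derivative options and pass a three-output objective
options = optimset('LargeScale','on','GradObj','on','Hessian','on', ...
                   'Display','off','TolFun',1e-4,'TolX',1e-4);
[xstar, fval, exitflag, output, G, H] = ...
    fminunc(@(x) negf(x, y, a, d, c, b), x0, options);

% Objective: -f with its gradient and Hessian (local function; in older
% MATLAB releases this would go in its own file, negf.m)
function [F, G, H] = negf(x, y, a, d, c, b)
    F = -( -0.5*log(2*pi) - 0.5*(x + a*b) - 0.5*y/exp(x + a*b) ...
           - 0.5*log(2*pi*c) - 0.5*(x - d*(1-a))^2/c );
    G = -( 0.5*(y/exp(x + a*b) - 1) - (x - d*(1-a))/c );   % gradient of -f
    H = -( -0.5*y/exp(x + a*b) - 1/c );                    % Hessian of -f
end
```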
Finally, the fminunc code is much more general and feature-rich than yours: it runs error checks and supports various user options that your custom code does not.