# fminunc gets first-order optimality of zero on iteration 0, does not find optimum

Koen Franse on 11 Aug 2020
Commented: Koen Franse on 17 Aug 2020
Hi everyone,
I am trying to optimize a parameter of a Finite Element model by using fminunc. However, the optimization algorithm finishes on iteration 0, with a first-order optimality measure of 0, and therefore does not find the optimum value for the parameter:
I have run my Finite element model with a few values for the parameter that I want to optimize:
As it looks to me, there is a clear global minimum, but for some reason fminunc doesn't find it. Does anyone know how to fix this problem?
Thanks in advance, Koen

John D'Errico on 11 Aug 2020
Edited: John D'Errico on 11 Aug 2020
First, this appears to be a ONE parameter optimization. Do not use fminunc for that. Instead, use fminbnd. It will be more robust, and it may even be more efficient.
Second, almost always when someone reports what you have, it means the function is coded incorrectly. Before performing an optimization, test whether the objective function changes for different inputs. The response you got is what you would see if the function were just a constant.
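As a hypothetical sketch of the fminbnd suggestion (the objective here and the bracket [1e3, 1e6] are placeholders, not the poster's FE model; substitute your own objective and a bracket known to contain the minimum):

```matlab
% Stand-in one-parameter objective; replace with your FE-model objective.
objfun = @(YM) (log(YM) - log(2e4)).^2;

% fminbnd needs only a bracketing interval, no gradient information.
[YM_opt, fval] = fminbnd(objfun, 1e3, 1e6);
```

fminbnd uses golden-section search with parabolic interpolation, so it does not rely on finite-difference gradients at all.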
For example, see what happens here:
>> testfun = @(x) 1;
>> testfun(1)
ans =
1
>> testfun(pi)
ans =
1
Anything I send into testfun, I get 1 out.
Now, what happens when I try fminunc here?
>> fminunc(testfun,pi)
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the value of the optimality tolerance.
<stopping criteria details>
ans =
3.1416
I get exactly the same result as you got.
So first, verify that if you send different input parameters into your function, you get something different.
Next, verify that your function is differentiable. fminunc REQUIRES this. If you do something inside that rounds the inputs or the output, then fminunc cannot be used. Again, an example should suffice:
>> testfun = @(x) round(x);
>> fminunc(testfun,1.1)
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the value of the optimality tolerance.
<stopping criteria details>
ans =
1.1
So for ANY x in the vicinity of an integer, you get the same integer out. NO change. As far as fminunc is concerned, this is a constant function, and it will terminate immediately.
>> testfun(.5:.1:1.4)
ans =
1 1 1 1 1 1 1 1 1 1
Why try to optimize something that does not change?
The odds are therefore good that if you think your function should produce some non-constant response, then you have a bug in your code. We cannot diagnose that without seeing the code, of course, and my MATLAB crystal ball is always on the fritz. :)
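The immediate termination above can be reproduced numerically. Assuming fminunc's documented default forward-difference step of roughly sqrt(eps) (scaled by the magnitude of x), the step is far too small to move round(x) off its plateau:

```matlab
% Approximate the forward-difference gradient fminunc would compute
% for testfun = @(x) round(x) at x0 = 1.1.
x0 = 1.1;
h  = sqrt(eps) * max(abs(x0), 1);          % ~1.6e-8, roughly the default step
g  = (round(x0 + h) - round(x0)) / h       % g = 0: both points round to 1
```

A zero finite-difference gradient at the initial point is exactly the stopping condition reported in the question.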
Koen Franse on 17 Aug 2020
Thanks for the tip Alan, scaling was indeed the problem. I now have the optimization working correctly.
For other people who might run into similar problems in the future: what I did is take
YM_scaled = log(YM)
as the input parameter for fminunc, and inside the objective function used the 'real'
YM = exp(YM_scaled)
as an input for my FE model. Finally, as the output of my objective function, I took
MSE_scaled = log( (1/(2*m))*sum( (displ_ref - displ ).^2 ) )
So now fminunc uses the log-values for both the input and the output, and in fact is optimizing the function I plotted above without the logarithmic axes. In this way fminunc can find an optimal value.
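Putting those three pieces together, the objective might look like the sketch below. Here runFEModel, displ_ref, m, and YM0 are placeholders for the poster's FE model and data, not code from the thread:

```matlab
% Objective evaluated in log space, as described above.
function MSE_scaled = objective(YM_scaled, displ_ref, m)
    YM    = exp(YM_scaled);        % recover the 'real' parameter
    displ = runFEModel(YM);        % hypothetical FE-model evaluation
    MSE_scaled = log( (1/(2*m)) * sum((displ_ref - displ).^2) );
end

% Optimize in log space, then transform the result back:
% YM_scaled_opt = fminunc(@(s) objective(s, displ_ref, m), log(YM0));
% YM_opt = exp(YM_scaled_opt);
```

Since log is monotone, minimizing the log of the MSE finds the same YM as minimizing the MSE itself; the transformation only improves the scaling that the solver sees.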

Bruno Luong on 14 Aug 2020
Edited: Bruno Luong on 14 Aug 2020
If your gradient is 0, then your FEM returns the exact same result for two different values of YM.
This can be caused by many things: truncation at the interface between MATLAB and your FEM software, some thresholding in the equation you are trying to solve, the mesh generator, etc. No one can tell, since you did not disclose the details of the FEM part.
Koen Franse on 14 Aug 2020
Okay, that explains a little more to me. However, there is still one thing I don't completely understand: as I understand it, the FMINUNC/FMINCON solvers use finite differencing to compute the gradient for each model evaluation. I tried some subtle differences in my YM parameter, which resulted in differences in the value computed by my objective function. That is what the finite differencing method does to approximate the gradient, right? So I still do not understand why the FMINUNC solver finds gradients of exactly zero.
You say I selected the wrong optimization method for this problem; do you maybe have a suggestion for a better alternative?

Bruno Luong on 14 Aug 2020
Again, the step you tried might not be the step FMINUNC selects; there is a whole decision tree behind FMINUNC's step choices. I don't know why the FEM returns the same value for two different YM, but obviously that happens. You might be able to track the steps FMINUNC uses by adding some instrumentation code to your objective function.
Since I don't know how your FEM works, I can't recommend a method. Your function also appears to have a very narrow valley whose sides look concave, which is not something gradient methods like.
But again, the problem seems to be that the FEM has some thresholding calculation on YM that makes your objective function non-smooth, so you can't use any gradient-based method.
I am starting to repeat myself a lot.
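The instrumentation idea can be sketched as follows. This is a hypothetical wrapper, with myObjective standing in for the poster's real objective function:

```matlab
% Log every YM value the solver actually evaluates, so the
% finite-difference steps fminunc chooses become visible.
function f = instrumentedObj(YM)
    persistent evals
    if isempty(evals), evals = []; end
    evals(end+1) = YM;                               %#ok<AGROW>
    fprintf('fminunc evaluated YM = %.15g\n', YM);
    f = myObjective(YM);   % placeholder for the real objective
end
```

Comparing consecutive logged YM values shows the actual perturbation size; if the FE model returns identical outputs for those nearby inputs, the finite-difference gradient is exactly zero.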

R2019a
