No, there is no solution for that.
Your inputfile_BJ has a marg entry with starting point 10 and mean 10, with type 1 and transform type 3. Type 1 is the Normal marginal distribution, for which z(i,:) = ( x(i,:) - mean ) / stdv, where x(i,:) is the starting point. With the starting point and the mean both being 10, that gives 0, so the result of x_to_u starts out as 0.
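For illustration, the arithmetic of that transform can be sketched like this (the variable names and the stdv value here are mine, not FERUM's):

```matlab
% x-to-u transform for a Normal (type 1) marginal: z = (x - mean) / stdv
x_start = 10;                 % starting point from marg in inputfile_BJ
mu      = 10;                 % mean from marg
sigma   = 2;                  % stdv -- illustrative; any positive value gives the same z here
z = (x_start - mu) / sigma;   % (10 - 10) / 2 = 0
```

So u enters form as exactly 0 before a single iteration has run.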
Then inside form,
[ G, grad_g ] = gfun(lsf,x,grad_flag,probdata,analysisopt,gfundata,femodel,randomfield);
where grad_flag is 'ffd', which was extracted from analysisopt. Inside gfun, the 'ffd' case runs, and with the evaluator flag set to 'basic' it calls
[ g, dummy ] = gfunbasic(lsf,allx(:,i),'no ',probdata,analysisopt,gfundata);
The 'no ' there becomes grad_flag inside gfunbasic.
gfunbasic hits the final "else" case, which evaluates the limit-state expression, executing gfun_BJ(Qb) where Qb = 10, because that is the starting point from marg.
gfun_BJ executes DemandSplitting(Qb). That calls fzero() on fun1 with Qb = 10, and that works this first time. After a series of calculations with Qb = 10, the result is MaxFr = 44. Back in gfun_BJ, that is subtracted from 50 to get g = 6, which is returned as the first output of gfunbasic.
Back in gfun's section for 'ffd', the 10 is perturbed to 10.02 and gfunbasic is called again, calling gfun_BJ, which again calls DemandSplitting(Qb), where Qb is now 10.02. It turns out that after all of those calculations with Qb = 10.02, MaxFr is 44 again, even though the input was slightly different. (I did not attempt to follow the computations inside DemandSplitting.)
Thus in gfun's 'ffd' section, g_a_step_ahead becomes 6, the same as g. The code computes grad_g(j,i) = (g_a_step_ahead - g)/h; and since both values are 6, that gives a grad_g of 0.
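You can reproduce the zero finite-difference gradient with a toy limit-state function that is flat near the starting point (this is a sketch of the 'ffd' arithmetic, not FERUM's actual code):

```matlab
% Forward finite difference, as in gfun's 'ffd' branch
g_fun = @(Qb) 50 - min(44, 4.4*Qb);   % toy g with a plateau near Qb = 10
h  = 0.02;                            % the step that turns 10 into 10.02
Qb = 10;
g              = g_fun(Qb);           % 50 - 44 = 6
g_a_step_ahead = g_fun(Qb + h);       % still 6: the plateau absorbs the perturbation
grad_g = (g_a_step_ahead - g) / h;    % (6 - 6) / 0.02 = 0
```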
This grad_g is returned to form, which then does
alpha = -grad_G / norm(grad_G);
but grad_G is 0, and norm(grad_G) is therefore also 0, so alpha here at the form level becomes 0 / 0 = NaN. That pollutes a number of later calculations, but is not the main problem.
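That 0/0 is easy to demonstrate in isolation:

```matlab
grad_G = 0;                        % the degenerate gradient from the ffd step
alpha  = -grad_G / norm(grad_G);   % -0 / 0 under IEEE 754 arithmetic
% isnan(alpha) is true: this is where the NaN is born
```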
form continues on to
d = search_dir(G,grad_G,u)
grad_G is 0 as explained, and u is 0 as explained at the start, because the starting point 10 minus the mean 10 is 0. search_dir does
alpha = -grad_G / norm(grad_G);
and with grad_G being 0, alpha is NaN at the search_dir level too, resulting in a search direction of NaN. Back in form, the next iterate is computed from d, and with d being the NaN returned from search_dir, u_new becomes NaN.
At that point, the code tries to evaluate at location NaN, which leads to calling fzero() on fun1 with Qb = NaN, and that is what triggers the fzero error message you are reporting about the initial value not being finite.
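fzero() validates its starting value up front, so a NaN start (or a NaN function value at the start) is rejected immediately. A minimal reproduction, with a stand-in for the real fun1:

```matlab
fun1 = @(x) x.^2 - 4;    % stand-in for the actual fun1 inside DemandSplitting
Qb   = NaN;              % the polluted location that eventually reaches fzero
caught = false;
try
    fzero(fun1, Qb);
catch err
    caught = true;
    disp(err.message)    % complains that the starting value is not finite
end
```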
In short, the code assumes that the gradient is never 0, and that turns out to be a bad assumption. Even in cases where the gradient is not logically 0, the code should allow for floating-point round-off, which can make a gradient evaluate numerically to 0 at floating-point precision even when it would be distinctly non-zero if calculated with sufficient precision.
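A defensive fix would be to test the gradient norm before dividing. FERUM does not do this, so the sketch below is a suggestion, not existing code:

```matlab
grad_G = 0;                          % degenerate gradient, as produced above
if norm(grad_G) < eps
    % bail out with a diagnostic instead of dividing 0 by 0
    warning('Gradient is numerically zero; try a different starting point in marg.');
    alpha = [];                      % signal failure to the caller
else
    alpha = -grad_G / norm(grad_G);
end
```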
The DemandSplitting code does a number of calculations and takes a max(). If any of those calculations is insensitive to the exact value of the input, then you run the risk of getting the same MaxFr for two different inputs, potentially leading to a gradient that is exactly 0.
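Here is the kind of insensitivity that can do it, with max() dominated by a constant branch (purely illustrative; I did not trace which term saturates inside DemandSplitting):

```matlab
MaxFr_of = @(Qb) max(44, 40 + 0.1*Qb);   % the constant 44 wins for small Qb
MaxFr_a  = MaxFr_of(10);                 % 44
MaxFr_b  = MaxFr_of(10.02);              % 44 again, so any finite difference built on it is 0
```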
You could try changing the starting point in marg in your inputfile_BJ, which might give you a non-zero gradient, but that would not fix the underlying problem: the code makes the faulty assumption that the gradient never becomes 0.
The initial part of DemandSplitting is well commented, with remarks such as "%yield strength of corroded tensile rebar", but the calculation part of it is almost uncommented.