Optimization of Logistic Function Variable K for Multiple Inputs Simultaneously

AES on 18 Jun 2022
Commented: AES on 18 Jun 2022
I am trying to optimize the 'slope' variable (k) of the logistic function
f(x) = L / (1 + exp(-k*(x - x0)))
where L = 2, my input values (more details in a bit) play the role of the (x - x0) term, and k is the term I am trying to determine for a known output f(x).
I am trying to squash some values using this function so that the transformed values lie between -1 and 1. In my code I refer to k as alpha.
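In MATLAB terms, a minimal sketch of that form (with L = 2, the output shifted down by 1 so it spans (-1, 1), and x standing in for the (x - x0) term) would be:
% L = 2 logistic, shifted down by 1 so the output lies strictly between -1 and 1
f = @(x, k) 2./(1 + exp(-k.*x)) - 1;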
The major issue I am having is that I want to find one value of k for transforming:
promAngles = [146.7589 115.7733 98.1666 66.8909 41.9377 26.2680 11.4212 -29.8628 -45.2301 -68.8243 -100.6234 -117.7418 -147.8271]
into this:
sC = [1.0000 0.9703 0.8210 0.5797 0.4067 0.2974 0.1467 -0.2974 -0.4067 -0.5797 -0.8210 -0.9703 -1.0000]
At first I tried the following, which I think is a correctly rewritten expression for the logistic function in terms of k, but it returns a vector of k values that satisfy it (promAngles are my data):
eqnNew = -(log((power((1 + (2./(sC+1))), 1./promAngles))));
which returns
newVarTemp = [ -0.0047 -0.0061 -0.0075 -0.0122 -0.0211 -0.0355 -0.0884 0.0451 0.0326 0.0254 0.0248 0.0359 0.1408]
However I am looking for a single value that minimizes the error between promAngles and sC.
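For reference, a minimal sketch of the exact per-sample inversion (solving sC = 2./(1 + exp(-k.*promAngles)) - 1 for k); it still gives one k per sample, and the endpoint samples where sC = ±1 come out as ±Inf:
% Exact inversion of sC = 2./(1 + exp(-k.*promAngles)) - 1, solved for k.
% One k per data point; sC = +/-1 maps to +/-Inf and would need to be excluded.
kPerPoint = -log(2./(sC + 1) - 1)./promAngles;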
Below is something else I have tried, in which I plug candidate values in for the k variable along with my input values, which are stored in a 1x13 vector.
%Possible alpha candidates
candVals = [0:0.001:100];
%Logistic Function
functAlpha = @(inputCurv, estAlpha) ((2./(1 + exp(-1*estAlpha.*inputCurv))) - 1);
lossList = nan(length(candVals), 1);
for cand = 1:length(candVals)
%Produces the scaled outputs associated with the alpha input
fAlpha = functAlpha(promAngles, candVals(cand));
%Determines the loss
fLoss = norm(fAlpha - sC, 2);
lossList(cand) = fLoss;
end
%Find the alpha with the minimum loss --> should allow producing the desired
%outputs (sC)
[minVal, linIndx] = min(lossList);
This produces the minimum error at alpha == 0.0230, which in turn gives this output:
fAlpha = [0.9339 0.8696 0.8106 0.6465 0.4481 0.2932 0.1306 -0.3305 -0.4778 -0.6592 -0.8201 -0.8750 -0.9354]
This is close, but not as close as it could be. Is there a better way to do this than increasing the sampling density of the candidate k values?
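A minimal sketch of one alternative to a finer grid, assuming base MATLAB's fminbnd is acceptable, is to minimize the same loss continuously over the 0 to 100 range:
% Minimize the same 2-norm loss continuously instead of on a fixed grid
lossFun   = @(alpha) norm(functAlpha(promAngles, alpha) - sC, 2);
bestAlpha = fminbnd(lossFun, 0, 100)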
Let me know if I should clarify more.

Accepted Answer

Torsten on 18 Jun 2022
Edited: Torsten on 18 Jun 2022
I think you will have to work on the approximation function "fun" in order to get better results.
promAngles = [146.7589 115.7733 98.1666 66.8909 41.9377 26.2680 11.4212 -29.8628 -45.2301 -68.8243 -100.6234 -117.7418 -147.8271];
sC = [1.0000 0.9703 0.8210 0.5797 0.4067 0.2974 0.1467 -0.2974 -0.4067 -0.5797 -0.8210 -0.9703 -1.0000];
fun = @(K) (2./(1 + exp(-K*promAngles)) - 1) - sC;
k0 = 0.02;
k = lsqnonlin(fun,k0)
Local minimum possible. lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance.
k = 0.0227
fun(k);
plot(promAngles,sC);
hold on
plot(promAngles,fun(k)+sC)
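If it helps to read the plot, fun(k) + sC is just the fitted logistic evaluated at promAngles; a minimal sketch of the same values computed directly (fitted is a hypothetical name, not from the original answer):
fitted = 2./(1 + exp(-k*promAngles)) - 1;   % identical to fun(k) + sC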
2 comments
AES on 18 Jun 2022
As a heads-up to anyone else looking at this solution: I made a small error when writing the question; the exp term should have a + instead of a -. I have corrected the question, so the accepted answer just needs that - swapped to a +. Thank you for your help.
Torsten on 18 Jun 2022
I changed the code accordingly.


More Answers (1)

Sam Chak on 18 Jun 2022
Is it allowed to fit using another type of function?
promAngles = [146.7589 115.7733 98.1666 66.8909 41.9377 26.2680 11.4212 -29.8628 -45.2301 -68.8243 -100.6234 -117.7418 -147.8271];
sC = [1.0000 0.9703 0.8210 0.5797 0.4067 0.2974 0.1467 -0.2974 -0.4067 -0.5797 -0.8210 -0.9703 -1.0000];
fun = @(p) (1/4)*(-p(1)*promAngles.*sign(promAngles - p(2)) ...
           - p(1)*promAngles.*sign(promAngles - p(2)).*sign(promAngles + p(2)) ...
           + p(1)*promAngles.*sign(promAngles + p(2)) ...
           + 3*sign(promAngles - p(2)) ...
           - sign(promAngles - p(2)).*sign(promAngles + p(2)) ...
           + sign(promAngles + p(2)) ...
           + p(1)*promAngles + 1) - sC;
p0 = [0.01 105];
p = lsqnonlin(fun, p0)
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
p = 1×2
0.0086 105.0000
plot(promAngles, sC, 'ro', promAngles, (1/4)*(-p(1)*promAngles.*sign(promAngles - p(2)) - p(1)*promAngles.*sign(promAngles - p(2)).*sign(promAngles + p(2)) + p(1)*promAngles.*sign(promAngles + p(2)) + 3*sign(promAngles - p(2)) - sign(promAngles - p(2)).*sign(promAngles + p(2)) + sign(promAngles + p(2)) + p(1)*promAngles + 1), 'b-')
grid on
legend('Data', 'Best fit', 'location', 'best', 'FontSize', 14)
xlabel('promAngles')
ylabel('sC')
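For readers who find the sign() combination hard to parse: up to its behaviour exactly at the breakpoints ±p(2), it is a straight line of slope p(1) clipped to ±1. A minimal sketch of an equivalent, more readable residual function (fun2 is a hypothetical name, not from the original post):
% Piecewise form: +1 above p(2), -1 below -p(2), linear p(1)*x in between
fun2 = @(p) ( (promAngles >  p(2)) ...
            - (promAngles < -p(2)) ...
            + (abs(promAngles) <= p(2)).*(p(1)*promAngles) ) - sC;
% p = lsqnonlin(fun2, p0) should give essentially the same fit.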
1 comment
AES on 18 Jun 2022
Thank you for your answer. I suppose I could, if it appropriately decreases the error. I noticed the function you used does that; however, since I am working with curvatures of a shape, I am not sure whether it is better to accept a larger error and keep the curvatures distinct, or to choose a lower-error function and risk approximating two curvatures as roughly the same (as happens with the highest and lowest values in promAngles). I appreciate the help, and I will think more about the risks and benefits of using a different fitting function.

