How can I define a function as the maximum of multiple functions?

Bo He
Bo He on 17 Jun 2020
Edited: John D'Errico on 17 Jun 2020

Answers (3)

Ameer Hamza
Ameer Hamza on 17 Jun 2020
Edited: Ameer Hamza on 17 Jun 2020
Create a function like this
function y = maxFun(varargin)
% Accepts any number of function handles and returns a handle that
% evaluates each of them at x and takes the element-wise maximum.
y = @(x) max(cell2mat(cellfun(@(fun) {fun(x(:))}, varargin)).');
end
It takes function handles as input and returns a function handle. For example:
f = maxFun(@(x) 0.3*x, @sin, @cos); % f is function handle
x = 0:0.01:5;
plot(x, f(x))
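A quick point check on the handle returned above, using the same three functions (the numeric values in the comments are approximate):
f(2)                            % -> 0.9093, i.e. sin(2), the largest of the three at x = 2
max([0.3*2, sin(2), cos(2)])    % same value, computed directly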

Walter Roberson
Walter Roberson on 17 Jun 2020
% hi is assumed to be a cell array of function handles, e.g. hi = {@(x) 0.3*x, @sin, @cos}
f = @(x) max(cellfun(@(h) h(x), hi))
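As written, this handle works for a scalar x only, because cellfun requires each h(x) to return a scalar. A minimal sketch of a vectorized variant, assuming hi is a cell array of handles as in the comment above (fvec is just an illustrative name):
fvec = @(x) arrayfun(@(xi) max(cellfun(@(h) h(xi), hi)), x); % element-by-element maximum over the handles
fvec(0:0.01:5); % now accepts a vector of evaluation points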
Note that max() is not continuously differentiable, so you cannot use fmincon() or anything similar to optimize it; you would need to use ga() if you wanted to optimize.

John D'Errico
John D'Errico on 17 Jun 2020
Edited: John D'Errico on 17 Jun 2020
This is not truly an answer to the question, but an alternative way of viewing the problem, in the event that the goal here is to perform an optimization. Walter brought up that point in his answer.
As a function itself, if the individual functions are all linear (as was indicated by the OP), then the maximum of that set of linear functions is itself a convex function, and the region above it is a convex domain. If a minimum of the aggregate "function" exists, then fmincon would be able to handle the transitions and walk downhill to a min.
It is easy for the problem to be unbounded. E.g., if we have
H1(x) = x + 1
H2(x) = 2*x - 3
then we see the argmin of max(H1(x),H2(x)) is an unbounded problem, since that max goes to -inf as x -> -inf.
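A quick numeric sketch of that unboundedness, with H1 and H2 written as anonymous handles:
H1 = @(x) x + 1;
H2 = @(x) 2*x - 3;
max(H1([-10 -100 -1000]), H2([-10 -100 -1000]))   % -> -9  -99  -999, decreasing without bound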
Here I'll add a third function that makes the problem well posed.
H1 = @(x) x+1;
H2 = @(x) 2*x - 3;
H3 = @(x) -4 - 3*x;
fplot(H1,[-3,5])
hold on
fplot(H2,[-3,5])
fplot(H3,[-3,5])
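To make the curves easy to tell apart, a legend can be added (the labels here are just the handle names):
legend('H1','H2','H3')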
So we can see the max is either the red, blue, or green line, depending on the value of x. The overall minimum of that function will lie at an intersection point of two or more of the lines, which represent the boundary of the feasible set.
Yes, I could solve it using fmincon directly, as a one variable problem.
[xval,fval,exitflag] = fmincon(@(x) max(H1(x),max(H2(x),H3(x))),10)
Local minimum possible. Constraints satisfied.
fmincon stopped because the size of the current step is less than
the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
<stopping criteria details>
xval =
-1.24999998707592
fval =
-0.24999998707592
exitflag =
2
The mini-max point is apparently located at x = -1.25, with a function value of -0.25. fmincon has done reasonably well, as I would expect here. I forced it to start at x == 10, so fmincon needed to step over a derivative singularity around x == 4, which was not a problem.
A better way to implement such a problem, even in fmincon, is NOT to define the objective as the max of a set of functions, but to define them as LINEAR constraints. Since all of the linear constraints will be satisfied at the solution, the problem is well posed, and now the objective is itself differentiable. That is, we will minimize y, such that
y >= H1(x)
AND
y >= H2(x)
AND
y >= H3(x)
Since the constraints are linear, but fmincon wants linear constraints to be of the form A*X <= B, write the problem in the (x,y) plane as a 2 variable problem. For example, y >= H1(x) means x + 1 <= y, i.e. x - y <= -1, which becomes the row [1 -1] of A and the entry -1 of B. Doing the same for H2 and H3 gives:
A = [1 -1;2 -1; -3 -1];
B = [-1; 3; 4];
[Xval,fval,exitflag] = fmincon(@(X) X(2),[10,20],A,B) % minimize y = X(2), starting from (x,y) = (10,20)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
<stopping criteria details>
Xval =
-1.24999998666573 -0.249999960000499
fval =
-0.249999960000499
exitflag =
1
Again, fmincon has found the solution. The objective function is now even fully differentiable. The constraints are well posed. All is good in the world.
Of course, better yet is to recognize the problem is itself fully linear, and thus just use linprog.
A = [1 -1;2 -1; -3 -1];
B = [-1; 3; 4];
[X,FVAL,EXITFLAG] = linprog([0 1],A,B) % cost vector [0 1] means: minimize 0*x + 1*y = y
Optimal solution found.
X =
-1.25
-0.25
FVAL =
-0.25
EXITFLAG =
1
Of course, this is the best approach of all, since linprog now finds what is as accurate a solution as possible. And linprog does not even need to do anything special like compute the gradient of the function.
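As a quick sanity check, using the H1, H2, and H3 handles defined above: at x = -1.25 all three lines sit at or below y = -0.25, and the two active ones intersect there.
x = -1.25;
[H1(x), H2(x), H3(x)]        % -> -0.25  -5.5  -0.25
max([H1(x), H2(x), H3(x)])   % -> -0.25, matching FVAL from linprog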
All of this is probably just a waste of mental effort, since there is no reason to assume the problem is in fact going to be an optimization problem. But it is worthwhile pointing out the difference between treating the functions H_i as part of the objective, and treating them as linear constraint boundaries.
