# Is there any MATLAB function to compute an integral using the Romberg method?


##### 1 Comment

John D'Errico
on 8 Feb 2023

@Sarvesh Kale - Well, you might be surprised. At least there is something close.

### Accepted Answer

John D'Errico
on 8 Feb 2023

Edited: John D'Errico
on 8 Feb 2023

Not exactly. If you look up Romberg integration, you will find it is based on Richardson extrapolation; at least classically, it is a Richardson extrapolation of the trapezoidal rule. Since I think many people may not understand Romberg integration, I'll explain.

Suppose we compute the integral of some function using the trapezoidal rule. For example, take a simple function like sin(x) over the interval [0,3*pi]. Yes, I know the integral is 2.

syms X

int(sin(X),[0,3*pi])

But now let me do it using the trapezoidal rule. I'm going to be a little tricky here in what I do: I'll do the integration several times.

fun = @sin;

N = [5 9 17 33 65 129 257]';

IntEst = zeros(size(N));

for i = 1:numel(N)

x = linspace(0,3*pi,N(i));

IntEst(i) = trapz(x,fun(x));

end

[N,3*pi./(N-1),IntEst,2 - IntEst]

You should see that at each step, I used a trapezoidal rule with double the number of subintervals. The second column is the step size used in the trapezoidal rule integration; as you see, it drops by a factor of 2 each time. The last column is the error in the result.

The integral estimates in the third column do approach 2. We can even plot the error of integration.

loglog(N,2 - IntEst,'o-')

I've used a loglog plot there, since it shows exactly the behavior we need to understand. The trapezoidal rule has an error that decreases as the square of the step size, so the error here drops by a factor of 4 at each step.
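The same experiment can be sketched in plain Python, for anyone who wants to check the factor-of-4 error decay outside MATLAB (the `trapezoid` helper here is my own, not a library call):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

exact = 2.0  # integral of sin(x) on [0, 3*pi]
errors = [exact - trapezoid(math.sin, 0.0, 3 * math.pi, n)
          for n in (4, 8, 16, 32, 64)]
# Each halving of h divides the error by roughly 4: the h^2 behavior.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```

Printing `ratios` shows values near 4, converging toward exactly 4 as h shrinks.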

Now, what do we know about the error structure of the trapezoidal rule? It decreases with h^2, where h is the step size. Perhaps it would have been better to show that behavior directly.

plot(3*pi./(N-1),IntEst,'o-')

This looks like a nice, simple quadratic polynomial. Does this give you an idea? Suppose the unknown value of the integral is V. Then each of those estimates of the integral is of the form:

V + k*h^2

So this should be a quadratic function of the step size. (There is no linear term in that quadratic polynomial.) Now, if I take two of those estimates and perform a little trickery, we can eliminate the quadratic term. That is, suppose we have two estimates of the value of this integral at two different step sizes,

V1 = V + k*h^2

V2 = V + k*(2*h)^2

Multiply the first by 4, then subtract. That will give us

4*V1 - V2 = 3*V

or

V = (4*V1 - V2)/3

Effectively, we are extrapolating that quadratic behavior in the step size down to an effective step size of zero.

n = 1:6;

format long g

(4*IntEst(n+1) - IntEst(n))/3

Do you see that this sequence of estimates is far better than the original trapezoidal rule results?

This trick of using a pair of estimates of an integral and extrapolating so the step size becomes effectively zero is usually called Richardson extrapolation, though Romberg's name is attached to the idea in some forms.

Interestingly enough, you can actually show the result is implicitly a higher order Newton-Cotes rule. Anyway, the above trick of cutting the interval size in half is a nice one, because we can now adaptively check where the integral has converged and where it has not, and then cut the step size in half only in those parts of the domain. And, finally, we never really needed to call trapz multiple times in the integrations above, because we could reuse some of those function evaluations.

Anyway, that is the theory of a Romberg integration. The extrapolations down to a stepsize of zero are a very nice solution, something I have used in many different codes.
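For anyone who wants the whole scheme in one place, here is a rough sketch in Python (not MATLAB; the function name `romberg` is my own) of the classic Romberg table, which just repeats the extrapolation above at successively higher orders:

```python
import math

def romberg(f, a, b, levels=6):
    """Build the Romberg table: column 0 holds trapezoidal estimates with the
    interval count doubling at each row; later columns apply repeated
    Richardson extrapolation. Returns the highest-order estimate."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        # Reuse earlier evaluations: only the new midpoints are sampled.
        new_pts = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new_pts
        for j in range(1, i + 1):
            # Eliminate the h^(2j) error term; for j == 1 this is (4*V1 - V2)/3.
            R[i][j] = (4 ** j * R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

est = romberg(math.sin, 0.0, 3 * math.pi)
```

Note that each row halves the step size but reuses all previous function values, which is exactly the economy mentioned above about not needing repeated calls to trapz.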

Finally, does this already exist in MATLAB? Well, yes. Sort of. The OLD function QUAD uses adaptive Richardson extrapolation, as applied to Simpson's rule. In fact, quad is old enough that it is now deprecated. But it effectively does an adaptive Romberg integration.

Now, there is nothing stopping you from using a nested call to quad, and that would be what you asked for. You would need to be careful to make sure the code handles vectorized calls properly. (I did take a quick look at quad2d, and it does not seem to be just a direct nesting of two calls to quad, as you might have hoped, but it does employ an adaptive scheme based on similar ideas.)
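To illustrate the nesting idea, here is a sketch (again in Python rather than MATLAB, with hypothetical names): the outer 1-D routine is handed an integrand that itself runs an inner 1-D integration.

```python
import math

def romberg1d(f, a, b, levels=5):
    """Minimal Romberg integrator: trapezoid rule plus repeated Richardson
    extrapolation, as described above."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        R[i][0] = 0.5 * R[i - 1][0] + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            R[i][j] = (4 ** j * R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

def romberg2d(f, ax, bx, ay, by):
    """Nest two 1-D calls: the outer integrand is itself an inner integral."""
    inner = lambda x: romberg1d(lambda y: f(x, y), ay, by)
    return romberg1d(inner, ax, bx)

# Example: the integral of x*y over the unit square is 1/4.
val = romberg2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0)
```

The cost is the catch: every outer function evaluation triggers a full inner integration, which is why purpose-built 2-D routines like quad2d and integral2 do something smarter.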

In the end though, IF this is not homework, then JUST USE INTEGRAL2. Don't try to write better code than has already been provided to you, not unless you know enough about what you are doing to already know everything I said above, and far more, including the parts I skipped over. You won't improve on those codes, written by people who DO know what they are doing.

