Normality test of the MATLAB function 'randn'

152 views (last 30 days)
MahdiH
MahdiH on 22 Jun 2021
Commented: Peter Perkins on 25 Jun 2021
Hi MATLAB community,
I have generated 2000 samples using the MATLAB function 'randn' and was trying to test them for normality. The h-value returns 0, which is expected; however, the p-value is too high! What is going wrong here? Please see the simple example below:
n = randn(1,2000);
[h,p] = lillietest(n)
h =
0
p =
0.4115 % Why is it so high?
[h,p] = kstest(n)
h =
logical
0
p =
0.8158 % Same here!

Accepted Answer

Peter Perkins
Peter Perkins on 24 Jun 2021
Edited: Peter Perkins on 25 Jun 2021
With all due respect (and I mean that), nonsense.
MahdiH, I don't know what you mean when you say that "the p-value is too high!" The p-value of a normality test applied to a sample actually drawn from the null hypothesis (a normal distribution) has a U(0,1) distribution; that's essentially the definition of a p-value. With repeated sampling you will see p-values ranging from below 0.05 (5% of the time you get a Type I error if alpha is 0.05, right?) up to above 0.95 (also 5% of the time). Run this code
n = 10000;
p = zeros(n,1);
for i = 1:n
    x = randn(1000,1);
    [~,p(i)] = kstest(x);   % p-value under the null for each fresh sample
end
cdfplot(p)                  % empirical CDF of the 10,000 p-values
to demonstrate that. [Figure: empirical CDF of the p-values, lying close to the U(0,1) CDF.]
That's a U(0,1) empirical dist'n in anyone's book. I used kstest because I know the null hypothesis exactly, so I don't have to estimate its parameters. Also, lillietest truncates the p-value at 0.5, because that's as high as its tabulated values go (which are not, by the way, Lilliefors' original tabulated values; computing power has increased rather a lot since the 60s, so we used larger Monte-Carlo simulations than he did). There's not a whole lot of point in caring about the exact p-value once you know it's larger than 0.5: you are going to fail to reject the null at any reasonable significance level. But if you want p-values all the way up to 1, you can tell lillietest to compute a Monte-Carlo p-value; it's slower.
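For example, a minimal sketch using lillietest's documented 'MCTol' option (the tolerance value here is just illustrative):
x = randn(2000,1);
[h,p] = lillietest(x,'MCTol',1e-3)   % Monte-Carlo p-value, estimated to within 1e-3; not capped at 0.5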
dpb, you've shown a probability plot for one sample. Drawing conclusions from that is a little like saying that this
>> randn
ans =
-0.96774
is wrong because the normal dist'n should have mean zero. I know you know that: "...for this particular instantiation." The entire sample is (pseudo)random; the extreme tails are especially noisy because they are by definition a small sample. You can expect the tails in any given prob plot to skew high or low. Run this code
for i = 1:16
    subplot(4,4,i);
    probplot(randn(10000,1))   % normal probability plot of a fresh sample
end
to get something like this: [Figure: 4-by-4 grid of normal probability plots; the extreme tails scatter above or below the reference line from panel to panel.]
Some of the time the extreme left tail is too light with respect to the theoretical dist'n, sometimes too heavy. Same for the right. Nothing wrong there. The K-S test with alpha=0.05 rejects normality 5% of the time, right on target. So would Lilliefors' test, so would A-D (which, IIRC, is more sensitive to departures in the tails).
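As a quick sanity check on that 5% figure, one can reuse the vector p from the loop above (a one-liner sketch):
mean(p < 0.05)   % fraction rejected at alpha = 0.05; should come out near 0.05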
The generator choices in MATLAB are well documented, including references to the literature (at the bottom of that page). There is a vast amount of literature discussing the various uniform generators and normal transformations that MATLAB offers, and these algorithms have been very heavily tested in that literature (but do note the caveat about not using the old ones: "The generators mcg16807, shr3cong, and swb2712 provide for backwards compatibility with earlier versions of MATLAB"). The default generator, the "vanilla" Mersenne Twister, is not known to have flaws other than linear complexity (and it's not suitable for cryptographic applications), but there are other algorithms to choose from if that one doesn't suit your fancy. You can also switch from the Ziggurat algorithm used by default for the normal transformation; inversion is slower but often recommended for better behavior in the extreme tails.
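For instance, a minimal sketch of switching to inversion via RandStream's documented 'NormalTransform' option:
s = RandStream('mt19937ar','NormalTransform','Inversion');
RandStream.setGlobalStream(s);   % subsequent randn calls on the global stream use inversion
x = randn(1000,1);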
  3 comments
Paul
Paul on 25 Jun 2021
What does "... better behavior in the extreme tails." mean?
Peter Perkins
Peter Perkins on 25 Jun 2021
I knew someone would ask that!
I'm a little rusty on this. IIRC, the inversion method can generate very extreme values far out in the tails (maybe just the upper tail?), values that Ziggurat can never produce because of floating-point limitations of the algorithm. Apologies, I forget the exact details. There are papers out there comparing normal generation methods, and Devroye's book probably covers it.
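For anyone who wants to poke at this, a rough illustrative sketch (the stream options are real; the comparison is just one way to look at the extremes, not a definitive test):
sz = RandStream('mt19937ar','Seed',1);                                % default Ziggurat transform
si = RandStream('mt19937ar','Seed',1,'NormalTransform','Inversion');
max(abs(randn(sz,1e6,1)))   % most extreme of a million Ziggurat draws
max(abs(randn(si,1e6,1)))   % most extreme of a million inversion draws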


More Answers (2)

dpb
dpb on 22 Jun 2021
Edited: dpb on 22 Jun 2021
I always use the Shapiro-Wilk test(*) and have little experience with the Lilliefors test, but some trials with various N have convinced me the MATLAB PRNG for the normal isn't much good...especially in the tails. Without more digging than I had time for right now, it's not clear which generator it is using by default to get a normal; the documentation says it uses the Mersenne Twister for the default uniform stream, but that doesn't easily relate to the actual normal generation algorithm, from what I found quickly, anyway.
I increased the sample size to 10K, and it's obvious even from just a histogram that the distribution is skewed in the tails...
[Figure: histogram of the 10K sample with the theoretical normal pdf overlaid.]
One observes a lot of white under the theoretical curve on the left, and not nearly so much on the right...for this particular instantiation.
Looking at the probability plot it's a lot more obvious, but the tails caught my eye just from the observed histogram without the pdf overlaid on it. As John D'Errico noted just a week or so ago, in reply to a question about a similar unusual result from a chi-square test, "when you have a lot of data, it's far easier to reject the hypothesis", because then the data have to be really, really normal. These aren't quite enough to actually reject the hypothesis, but they're not all that close -- and, in fact, the MATLAB implementation just caps the returned p-value at 0.5 in this case.
[Figure: normal probability plot of the same sample.]
This really shows the shortfall of observations in the left-hand tail region and, to a lesser extent, a slight excess in the right. The deviation really begins to show up at about the 0.5% tail regions, not just in the extreme tails, as the bulk of the asterisks by that point are above/below the line.
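For reference, a minimal sketch of the kind of plots described above (the normpdf overlay is an assumption about how the histogram figure was produced):
x = randn(10000,1);
histogram(x,'Normalization','pdf'); hold on   % empirical density of the sample
xs = linspace(-4,4,200);
plot(xs,normpdf(xs))                          % theoretical N(0,1) density overlaid
figure, probplot(x)                           % normal probability plot of the same sample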
(*) A paper on the power of various tests for normality, which I was made aware of through my past reactor background and interaction with the NRC, is
  3 comments
MahdiH
MahdiH on 23 Jun 2021
@Paul Thanks for your feedback. I have tried a larger sample size; it increased the p-value, but it is still way higher than the value I anticipated.
Peter Perkins
Peter Perkins on 24 Jun 2021
Mahdi, as I said in my reply below, your expectation of what value the p-value "should" have seems misguided. Expect to see p-values uniformly distributed between 0 and 1 when drawing from the null hypothesis.



MahdiH
MahdiH on 22 Jun 2021
Dear @dpb,
Appreciate your response. After reading your reply, I did more experiments, and it seems even the Shapiro-Wilk test does not provide a convincing p-value. It is really disappointing to fail to show that MATLAB's 'randn' follows a normal distribution "with significance".
I have used the Shapiro-Wilk (SW) test to check the normality of randn, and the results are as follows:
n=randn(1,1000);
[H, pValue, W] = swtest(n, 0.05)
H =
logical
0
pValue =
0.2950
W =
0.9981
I guess my question now is: is there any other way, other than randn, to generate a random variable that follows a normal distribution?
  3 comments
dpb
dpb on 23 Jun 2021
Mayhaps John D'Errico will stumble across this thread -- he's a far better mathematician than I am and, while retired, has been active quite a bit more recently than I have.
He probably would have some salient input...
Peter Perkins
Peter Perkins on 24 Jun 2021
Depending on when way back actually was, IMSL on mainframes likely used a multiplicative congruential generator; I don't think we want to go back to those days.

