How do I determine if a matrix is nilpotent using MATLAB?

I tried to solve for an x that would show whether A is nilpotent, using the following code:
A =[0 1 2 3 4 5 6 7;0 0 8 9 10 11 12 13;0 0 0 1 2 3 4 5; 0 0 0 0 6 7 8 9;0 0 0 0 0 1 2 3;0 0 0 0 0 0 4 5; 0 0 0 0 0 0 0 1;0 0 0 0 0 0 0 0];
B_A = zeros(8);
syms x
eqn = A.^x == B_A
solx = solve(eqn,x)
But I keep getting solx = Empty sym: 0-by-1
How do I get a solution for x? Any other method to check for nilpotency would also be helpful.

Accepted Answer

Torsten on 10 Apr 2017
Your equation is wrong; it must read
eqn = A^x == B_A
(A.^x is the element-wise power; nilpotency concerns the matrix power A^x.) But you should not check for nilpotency this way.
In your case, it's obvious by inspection that A is nilpotent since it is upper triangular.
For the general case, I'd check whether A has only 0 as an eigenvalue:
help eig
Or, provided n is small, just calculate A^n for an (n x n) matrix A and see whether it is the zero matrix.
Best wishes
Torsten.
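Torsten's A^n test is easy to script. The thread is MATLAB, but here is an illustrative Python/NumPy translation of the same check (a sketch; the variable names are mine):

```python
import numpy as np

# The (strictly upper triangular, hence nilpotent) matrix from the question.
A = np.array([
    [0, 1, 2, 3, 4, 5, 6, 7],
    [0, 0, 8, 9, 10, 11, 12, 13],
    [0, 0, 0, 1, 2, 3, 4, 5],
    [0, 0, 0, 0, 6, 7, 8, 9],
    [0, 0, 0, 0, 0, 1, 2, 3],
    [0, 0, 0, 0, 0, 0, 4, 5],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
], dtype=float)

# An n-by-n matrix is nilpotent iff A^n is the zero matrix,
# so a single matrix power settles the question.
n = A.shape[0]
is_nilpotent = not np.any(np.linalg.matrix_power(A, n))
print(is_nilpotent)  # True
```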

11 comments

Amy Olivier on 10 Apr 2017 (edited)
I rewrote the code using a series of for loops and a while loop. It works, but only if n is not greater than a number hard-coded in the script.
Thanks for the help
The smallest x for which A^x = 0 cannot exceed n. So just test whether A^n = 0.
Best wishes
Torsten.
"For the general case, I'd check whether A has only 0 as eigenvalue"
Suppose we have the following matrix
z = sqrt(sym(2));
z1 = sqrt(sym(3));
AA = [-z, -z^2/z1; z1, z]
AA =
[-2^(1/2), -(2*3^(1/2))/3]
[ 3^(1/2),        2^(1/2)]
It is nilpotent and, as required, it has only zero as a repeated eigenvalue:
AA*AA
ans =
[0, 0]
[0, 0]
eig(AA)
ans =
0
0
If we enter that matrix as a double:
format short e
z = sqrt((2));
z1 = sqrt((3));
AAA = [-z, -z^2/z1; z1, z]
AAA = 2x2
  -1.4142e+00  -1.1547e+00
   1.7321e+00   1.4142e+00
it is, perhaps surprisingly, nilpotent in double precision
AAA*AAA
ans = 2x2
   0   0
   0   0
any(ans(:))
ans = logical
0
But, its eigenvalues seem to be quite large. I mean, I'm not surprised the eigenvalues aren't exactly zero, but I was surprised they are on the order of 1e-8.
eig(AAA)
ans =
   2.5025e-17 + 2.0134e-08i
   2.5025e-17 - 2.0134e-08i
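That ~1e-8 magnitude is what perturbation theory predicts: for a 2x2 Jordan block with eigenvalue 0, a backward error of size eps moves the computed eigenvalues by roughly sqrt(eps) ≈ 1.5e-8. A quick illustrative check in Python/NumPy (same matrix as above; a sketch, not part of the thread):

```python
import numpy as np

z, z1 = np.sqrt(2.0), np.sqrt(3.0)
AAA = np.array([[-z, -z**2 / z1],
                [z1,  z]])

# The square is numerically the zero matrix (entries are at most a few ulps) ...
print(np.max(np.abs(AAA @ AAA)))

# ... yet the eigenvalues of this defective zero eigenvalue are only
# accurate to roughly sqrt(machine epsilon), i.e. about 1e-8.
print(np.max(np.abs(np.linalg.eigvals(AAA))))
```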
This example is more in line with what I was expecting:
z = 2;
z1 = 3;
AAA = [-z, -z^2/z1; z1, z]
AAA = 2x2
  -2.0000e+00  -1.3333e+00
   3.0000e+00   2.0000e+00
AAA*AAA
ans = 2x2
   0   0
   0   0
eig(AAA)
ans = 2x1
   2.2204e-16
   2.2204e-16
In general, I'm curious about how large the eigenvalues can get, perhaps if the matrix itself is larger and/or if its index is higher. I'm also wondering if that first version of AAA has some property, or is a member of a class of nilpotent matrices sharing a common property, that makes its eig computation sensitive.
Any insights into any of this?
I don't think this is a question about nilpotency, but a more general question about how accurately eigenvalues can be computed. Using a different algorithm gives a better result in this special case:
z = sqrt((2));
z1 = sqrt((3));
AAA = [-z, -z^2/z1; z1, z];
eig(AAA,eye(2),'qz')
ans = 2x1
1.0e-16 *
  -0.3392
  -0.7022
z = 2;
z1 = 3;
AAA = [-z, -z^2/z1; z1, z];
eig(AAA,eye(2),'qz')
ans = 2x1
1.0e-15 *
  -0.2444
        0
For sure, this is a question about computing eigenvalues.
Because AAA is not symmetric, the generalized eigenvalue solution will always use the QZ algorithm, so it's really a question of what the difference is between eig(A) and eig(A,eye,alg). Is solving the former fundamentally different from the latter in some way? Is the latter always expected to be more accurate for matrices that have any repeated eigenvalues, or all repeated eigenvalues, or all repeated eigenvalues at 0?
z = sqrt((2));
z1 = sqrt((3));
AAA = [-z, -z^2/z1; z1, z];
e1 = eig(AAA)
e1 =
1.0e-07 *
   0.0000 + 0.2013i
   0.0000 - 0.2013i
e2 = eig(AAA,eye(2),'chol') % still uses qz
e2 = 2x1
1.0e-16 *
  -0.3392
  -0.7022
e3 = eig(AAA,eye(2),'qz')
e3 = 2x1
1.0e-16 *
  -0.3392
  -0.7022
isequal(e2,e3)
ans = logical
1
Torsten on 13 Apr 2024 (edited)
Is solving the former fundamentally different than the latter in some way? Is the latter always expected to be more accurate for matrices that have any repeated eigenvalues, or all repeated eigenvalues, or all repeated eigenvalues at 0?
Sorry, but I cannot answer this. Maybe Christine Tobler or Bruno Luong have experience with the two possible algorithms and their advantages/disadvantages for problems with repeated eigenvalues.
Another method to check whether 0 is an eigenvalue of A:
A =[0 1 2 3 4 5 6 7;0 0 8 9 10 11 12 13;0 0 0 1 2 3 4 5; 0 0 0 0 6 7 8 9;0 0 0 0 0 1 2 3;0 0 0 0 0 0 4 5; 0 0 0 0 0 0 0 1;0 0 0 0 0 0 0 0];
isnilpotent = ~isempty(null(A))
isnilpotent = logical
1
It is perhaps more "scalable" than EIG and friends.
You would still have to check that 0 is the only eigenvalue A has ...
Oh yeah, good catch @Torsten.
Strictly speaking, my claim is not wrong; I never said it checks that A is nilpotent. ;)
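Torsten's caveat is the crux: a nonempty null space only shows that 0 is an eigenvalue, not that it is the only one. A minimal counterexample, sketched in Python/NumPy for illustration:

```python
import numpy as np

# 0 is an eigenvalue (nonempty null space), but A is NOT nilpotent:
# the eigenvalue 1 survives in every power of A.
A = np.diag([0.0, 1.0])

null_dim = A.shape[0] - np.linalg.matrix_rank(A)
print(null_dim > 0)                                   # True: 0 is an eigenvalue
print(np.any(np.linalg.matrix_power(A, A.shape[0])))  # True: A^n != 0
```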
Here is another way to find the degree of a nilpotent matrix, i.e. the smallest x such that A^x = 0:
A =[0 1 2 3 4 5 6 7;
0 0 8 9 10 11 12 13;
0 0 0 1 2 3 4 5;
0 0 0 0 6 7 8 9;
0 0 0 0 0 1 2 3;
0 0 0 0 0 0 4 5;
0 0 0 0 0 0 0 1;
0 0 0 0 0 0 0 0];
J = jordan(A); % requires the Symbolic Math Toolbox
d0 = diag(J);
isnilpotent = all(d0 == 0);
if isnilpotent
    d1 = diag(J,1); % first superdiagonal of the Jordan form
    % The longest run of nonzeros on the superdiagonal has length
    % (largest Jordan block size) - 1, and the degree equals that block size.
    j = diff([0; d1~=0; 0]);          % +1 where a run starts, -1 one past its end
    l = find(j == -1) - find(j == 1); % lengths of the runs of consecutive nonzeros
    maxl = max(l);
    x = sum([maxl 1]); % sum returns 1 if maxl is empty, maxl+1 otherwise
else
    x = [];
end
x
x = 8
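Without the Symbolic Math Toolbox, the degree can also be found by simply multiplying until the power vanishes; at most n products are needed. An illustrative Python/NumPy sketch (the helper name is mine; it uses an exact-zero test, which is fine for integer matrices like this one but would need a tolerance for general floating-point input):

```python
import numpy as np

def nilpotency_index(A):
    """Smallest x with A^x == 0, or None if A is not nilpotent.

    For an n-by-n nilpotent matrix the index never exceeds n,
    so at most n checks are needed.
    """
    n = A.shape[0]
    P = np.array(A, dtype=float)  # P holds A^x
    for x in range(1, n + 1):
        if not np.any(P):  # exact-zero test; use a tolerance for float data
            return x
        P = P @ A
    return None

A = np.array([
    [0, 1, 2, 3, 4, 5, 6, 7],
    [0, 0, 8, 9, 10, 11, 12, 13],
    [0, 0, 0, 1, 2, 3, 4, 5],
    [0, 0, 0, 0, 6, 7, 8, 9],
    [0, 0, 0, 0, 0, 1, 2, 3],
    [0, 0, 0, 0, 0, 0, 4, 5],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
])
print(nilpotency_index(A))  # 8, agreeing with the Jordan-based result
```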
It looks to me like the EIG methods (both standard and generalized) are hard to use for medium/large nilpotent matrices: they estimate the zero eigenvalues with huge errors. Eigenvalues of high multiplicity are challenging to handle numerically.
n = 100; % 1000
A = diag(ones(1,n-1), 1);
[P,~]=qr(randn(size(A)));
B=P*A*P';
% B should be nilpotent; this residual is small
max(abs(B^n), [], "all")/norm(B)
ans = 1.7070e-15
% check eigen value of A
lambdaA = eig(A,eye(n),'qz','vector');
abs(lambdaA) % OK, they are all 0
ans = 100x1
     0
     0
     0
   ...
norm(lambdaA,'inf')
ans = 0
% Do the same for B:
% check the eigenvalues of B, an orthogonal similarity transform of A
lambdaB = eig(B,eye(n),'qz','vector');
sort(abs(lambdaB)) % however, none of them is close to 0!
ans = 100x1
   0.4155
   0.4155
   0.5856
   0.6826
   0.6826
   0.6879
   0.6879
   0.6891
   0.6891
   0.6922
   ...
norm(lambdaB,'inf')
ans = 0.7013
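These numbers match a known pseudospectral effect: perturbing an n-by-n Jordan block by eps scatters its eigenvalues on a circle of radius roughly eps^(1/n), and with eps ≈ 1e-16 and n = 100 that radius is about 0.69, close to the 0.7013 above. The experiment is easy to reproduce; here is an illustrative Python/NumPy sketch (seeded for reproducibility, and using the standard eig rather than the generalized QZ call):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Single n-by-n Jordan block with eigenvalue 0 (ones on the superdiagonal).
A = np.diag(np.ones(n - 1), 1)

# Random orthogonal similarity: in exact arithmetic B has the same
# (all-zero) eigenvalues as A.
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = P @ A @ P.T

# B^n is numerically zero, so B is nilpotent up to rounding ...
print(np.max(np.abs(np.linalg.matrix_power(B, n))))  # tiny

# ... yet the computed eigenvalues sit near eps**(1/n) ~ 0.69 in magnitude.
print(np.max(np.abs(np.linalg.eigvals(B))))
```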
