I am dealing with a gradient descent based minimization problem. My matrix is heavily ill-conditioned, which leads to false recovery in the forward/inverse problem.

clc;
clear;
close all;
%% Green's function between sources and targets
soux = [3,3,4,4,5,5];   % source x coordinates
souy = [3,4,3,4,3,4];   % source y coordinates
w2   = zeros(6,6);
grf2 = zeros(6,1);      % Green's function values for one source
for k = 1:6
    hg = 1;
    for i = 3:5         % target x coordinate
        for j = 3:4     % target y coordinate
            p = soux(k);
            q = souy(k);
            temp1 = (i-p)*(i-p) + (j-q)*(j-q) + (0.2-0)^2; % squared distance
            dt1 = sqrt(temp1);
            % kg = 9.9260
            kg = 0.3897;
            grf2(hg,1) = exp(-kg*dt1)/(2.0673*dt1);
            hg = hg + 1;
        end
    end
    w2(:,k) = grf2;
end
G = w2;
%% Green's function between targets and detectors
a = 3;
b = 4;
c = 5;
dtx = [a,a,b,b,c,c];    % detector x coordinates
dty = [a,b,a,b,a,b];    % detector y coordinates
w1   = zeros(6,6);
grf1 = zeros(6,1);      % Green's function values for one target
t = 1;
for i = 3:5             % target x coordinate
    for j = 3:4         % target y coordinate
        for k = 1:6
            g = dtx(k);
            m = dty(k);
            temp = (i-g)*(i-g) + (j-m)*(j-m) + (0.4-0.2)^2; % squared Euclidean distance
            dt = sqrt(temp);
            kk = 0.3897; % kk is in centimeters, not millimeters (20.422 is also in centimeters)
            grf1(k,1) = exp(-kk*dt)/(2.0673*dt); % Green's function from target to detector
        end
        w1(:,t) = grf1; % column t of the sensing matrix
        % w2 = normc(w1);
        t = t + 1;
    end
end
H = w1;
%%
I = eye(6,6);
f = [1 1 0 0 0 0;
     1 1 0 0 0 0;
     0 0 1 1 0 0;
     0 0 1 1 0 0;
     0 0 0 0 0 0;
     0 0 0 0 0 0];
%%
% Note: f is 6x6, so diag(f) is its 6x1 diagonal; I - G*diag(f) therefore
% relies on implicit expansion to produce a 6x6 matrix A.
A = I - G*diag(f);
% Here we are trying to minimize over u and find its values.
u_ini = zeros(6,6);
u_in  = ones(6,6);
% u_in = [1.995 0.997 0.955 0.996 0.988 0.992;
%         0.978 1.985 0.956 0.968 0.9823 0.934;
%         0.987 0.999 1.991 0.991 0.956 0.925;
%         0.965 0.989 0.963 1.989 0.993 0.973;
%         0.968 0.934 0.982 0.996 1.945 0.924;
%         0.988 0.989 0.954 0.9786 0.969 1.991];
% imagesc(u_in)
% colormap(gray)
%% Objective function and gradient
% Frobenius norm, so the objective matches the gradient A'*(A*u - u_in)
funval = @(u) 0.5*norm(A*u - u_in,'fro')^2; % value of the objective function
gradu  = @(u) A'*(A*u - u_in);              % gradient (Tikhonov / least-squares case)
u_size = size(u_ini);
iterations = 200;
stepsize   = 0.001;
u_next = u_ini;
u_rec  = u_ini;                 % record of the iterates
ff = zeros(iterations,1);       % objective value at each iteration
ii = (1:iterations)';
for i = 1:iterations
    u_next = u_next - stepsize*gradu(u_next); % steepest-descent update
    u_rec  = [u_rec, u_next];                 % append the current iterate
    ff(i,1) = funval(u_next);
end
% rec = reshape(u_next, u_size);
u_tilda = u_next;
funval(u_tilda)
figure()
plot(ii,ff)
% display(u_tilda);
%% Calculating the field at the detectors
Y = (H*diag(u_tilda)).*f; % diag(u_tilda) is 6x1, so this relies on implicit expansion
% Y = abs(Y)
figure()
imagesc(Y)
colormap(gray)
pp = H*diag(u_tilda);
pp_inv = pinv(pp);        % pseudoinverse used for the recovery
fff = pp_inv.*Y;          % element-wise product with implicit expansion, not a matrix solve
figure()
imagesc(f)                % true target map
colormap(gray)
figure()
imagesc(fff)              % recovered map
colormap(gray)
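For reference, since the issue is the conditioning of the forward operator, a quick cond() check plus a sketch of what an explicitly Tikhonov-regularized objective and gradient would look like is given below. It assumes A, H and u_in from the script above; the lambda value is only a placeholder that would need tuning.

% Diagnostic sketch: quantify the ill-conditioning of the operators above.
fprintf('cond(A) = %g\n', cond(A));   % conditioning of the forward operator
fprintf('cond(H) = %g\n', cond(H));   % conditioning of the sensing matrix
% One standard remedy is an explicit Tikhonov penalty, which changes both
% the objective and the gradient used in the descent loop.
lambda  = 1e-2;                       % placeholder regularization weight, to be tuned
funreg  = @(u) 0.5*norm(A*u - u_in,'fro')^2 + 0.5*lambda*norm(u,'fro')^2;
gradreg = @(u) A'*(A*u - u_in) + lambda*u;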
2 Comments

Matt J on 6 Feb 2022 (edited 6 Feb 2022)
It's not clear what your question is. If your matrix is ill-conditioned, bad results from steepest descent are to be expected.
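For illustration, steepest descent on a deliberately ill-conditioned quadratic shows this stalling directly; the 2x2 matrix and step size below are made up purely for the toy example.

% Toy example: steepest descent on 0.5*x'*Q*x with cond(Q) = 1e4.
Q = diag([1, 1e4]);     % badly conditioned by construction
x = [1; 1];             % starting point
t = 1/max(eig(Q));      % fixed step size 1/L, with L the largest eigenvalue
for k = 1:200
    x = x - t*(Q*x);    % gradient of 0.5*x'*Q*x is Q*x
end
disp(x)                 % x(2) is solved immediately, x(1) has barely moved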
asim asrar on 7 Feb 2022
Thank you, Matt, for the response. Can you suggest how such problems can be eradicated?


Accepted Answer

Matt J on 10 Feb 2022
Try Newton's method instead of steepest descent. If you have the Optimization Toolbox, fminunc() basically has Newton-like methods already implemented for you.
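A minimal sketch of what that could look like with the objective from the question (assuming A and u_in are already in the workspace; the options below are just one reasonable configuration, not the only one):

% Sketch: fminunc with a user-supplied gradient on the question's objective.
% deal() lets one anonymous function return both the value and the gradient.
fun  = @(u) deal(0.5*norm(A*u - u_in,'fro')^2, A'*(A*u - u_in));
opts = optimoptions('fminunc', ...
    'Algorithm','trust-region', ...         % Newton-type steps
    'SpecifyObjectiveGradient',true, ...
    'MaxIterations',200);
u0 = zeros(6,6);                            % same initial guess as in the question
[u_tilda, fval] = fminunc(fun, u0, opts);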

