Value to differentiate must be a traced dlarray scalar.

Fisehatsion Mesfin on 5 August 2022
Answered: Parag on 30 June 2025
function [gradients, loss] = ModelD(k, M, Validate_train, params, NN_layer)
%% Attention
Attentionweight = stripdims(squeeze(params.attention.weight)); % Calculate the score weight
weight_out = stripdims(squeeze(params.attention.output_weight));
bias = stripdims(squeeze(params.attention.bias));
Validate_train = Validate_train(:,:);
validate_data_in = Validate_train(randperm(size(Validate_train,1)),:);
Validate_train_x = validate_data_in(:,1:3);
Validate_train_y = validate_data_in(:,4:end);
A_zero = zeros(size(Validate_train_y,1),1);
Validate_train_y = [Validate_train_y, A_zero];
Validate_data_x = [];
for i = 1:k
    for j = 1:NN_layer
        Validate_data_x(i,j) = Validate_train_x(j);
        Validate_train_x(j) = Validate_train_x(j+3);
    end
end
y_in = Validate_train_y(1:M,:);
Index = randi([1,M],1,1);
X_in = Validate_data_x(Index,:);
Y_in = repmat(y_in(Index,:),11);
for i = 1:NN_layer
    h = X_in(i);
    ht = Y_in(1,i);
    A = Attentionweight(i).*h;
    B = weight_out*ht;
    C = bias(i);
    score(i) = tanh(A + B + C);
end
score = score';
score = dlarray(score,'CB');
a = softmax(score);
Vt = [];
for i = 1:NN_layer
    AA = a(i)*X_in(i);
    Vt = [Vt AA];
end
Vt = dlarray(Vt,'CB');
loss = mse(Vt,X_in);
gradients = dlgradient(loss,params);
end

Answers (1)

Parag on 30 June 2025
The error "Value to differentiate must be a traced dlarray scalar" occurs because:
  1. Missing dlfeval context
  • dlgradient requires the computation to be traced (tracked for automatic differentiation).
  • In the code, ModelD was called directly instead of inside dlfeval, so MATLAB could not trace the operations (a minimal sketch follows).
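For illustration, here is a minimal, hypothetical sketch (myLoss is a made-up function, not the poster's model) contrasting the failing direct call with the traced call:
w = dlarray(rand(3,1));
x = dlarray(rand(3,1));
% [g, l] = myLoss(w, x);          % errors: dlgradient has no tracing context
[g, l] = dlfeval(@myLoss, w, x);  % works: dlfeval records the operations for autodiff
function [g, l] = myLoss(w, x)
    l = sum((w .* x).^2, 'all');  % scalar dlarray loss
    g = dlgradient(l, w);         % gradient of l with respect to w
end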
There are some other issues with the code as well:
  1. Non-Scalar Loss
  • The loss must be a scalar for gradient computation.
  • The original mse(Vt, X_in) might return a non-scalar (e.g., a vector or matrix) if Vt and X_in are not properly reduced (see the first sketch after this list).
  2. Improper dlarray Handling
  • Some operations (like repmat and indexing) were breaking the computation graph, preventing gradient tracing (see the second sketch after this list).
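As a hedged illustration (reusing the Vt and X_in names from the corrected code below), an explicit reduction guarantees the loss is a scalar:
err = Vt - X_in;            % elementwise residuals
loss = sum(err.^2, 'all');  % the 'all' reduction always yields a scalar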
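And here is a sketch of the trace-preserving pattern used in the corrected code: preallocate a dlarray and assign into it, rather than growing a plain array and re-wrapping it afterwards:
Vt = dlarray(zeros(1, NN_layer), 'CB');  % preallocated dlarray, stays on the trace
for i = 1:NN_layer
    Vt(i) = a(i) * X_in(i);              % traced subscripted assignment keeps the graph intact
end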
Please refer to this MATLAB code, with execution on dummy input:
function [gradients, loss] = ModelD(k, M, Validate_train, params, NN_layer)
% Convert parameters and data to dlarray so every operation is traced
params.attention.weight = dlarray(params.attention.weight);
params.attention.output_weight = dlarray(params.attention.output_weight);
params.attention.bias = dlarray(params.attention.bias);
Validate_train = dlarray(Validate_train);

% Shuffle and split into inputs (columns 1:3) and targets (columns 4:end)
validate_data_in = Validate_train(randperm(size(Validate_train,1)),:);
Validate_train_x = validate_data_in(:,1:3);
Validate_train_y = validate_data_in(:,4:end);
A_zero = dlarray(zeros(size(Validate_train_y,1),1));
Validate_train_y = [Validate_train_y, A_zero];

% Build the windowed input matrix as a preallocated dlarray
Validate_data_x = dlarray(zeros(k, NN_layer));
for i = 1:k
    for j = 1:NN_layer
        Validate_data_x(i,j) = Validate_train_x(j);
        Validate_train_x(j) = Validate_train_x(j+3);
    end
end

% Pick one random sample
y_in = Validate_train_y(1:M,:);
Index = randi([1,M],1,1);
X_in = Validate_data_x(Index,:);
Y_in = repmat(y_in(Index,:), 1, NN_layer); % Fixed repmat dimensions

% Attention scores
score = dlarray(zeros(NN_layer,1), 'CB');
for i = 1:NN_layer
    h = X_in(i);
    ht = Y_in(1,i);
    A = params.attention.weight(i) .* h;
    B = params.attention.output_weight * ht;
    C = params.attention.bias(i);
    score(i) = tanh(A + B + C);
end
a = softmax(score);

% Attention-weighted output, assigned into a preallocated dlarray
Vt = dlarray(zeros(1,NN_layer), 'CB');
for i = 1:NN_layer
    Vt(i) = a(i) * X_in(i);
end

% Explicit scalar loss, then gradients with respect to all fields of params
loss = sum((Vt - X_in).^2, 'all');
gradients = dlgradient(loss, params);
end
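You can exercise it with the dummy inputs below (keep ModelD in its own file, or place this script code above the function if they share a file, since MATLAB requires local functions at the end of a script):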
k = 5;
M = 10;
NN_layer = 3;
Validate_train = rand(100, 7);
params.attention.weight = rand(NN_layer, 1);
params.attention.output_weight = rand(1, 1);
params.attention.bias = rand(NN_layer, 1);
params.attention.weight = dlarray(params.attention.weight);
params.attention.output_weight = dlarray(params.attention.output_weight);
params.attention.bias = dlarray(params.attention.bias);
[gradients, loss] = dlfeval(@ModelD, k, M, Validate_train, params, NN_layer);
disp(loss);
disp(gradients);
Hope this helps resolve the issue!
