How to filter data on GPU before computing loss?

2 views (last 30 days)
Frederik Rentzsch on 5 Mar 2024
Hey, I would like to create a custom RegressionLayer where, in the forwardLoss method, the data (Y and T) get filtered with a high-pass filter, e.g. y_filt(n) = y(n) - 0.85*y(n-1). In standard MATLAB this would be conv(Y,[1 -.85]), but this doesn't work on the GPU. I found dlconv, but it isn't clear to me from the documentation whether I can use it here and, if so, how to make it work.
classdef maeRegressionLayer < nnet.layer.RegressionLayer ...
        & nnet.layer.Acceleratable
    % Example custom regression layer with mean-absolute-error loss.
    methods
        function layer = maeRegressionLayer(name)
            % layer = maeRegressionLayer(name) creates a
            % mean-absolute-error regression layer and specifies the layer
            % name.
            % Set layer name.
            layer.Name = name;
            % Set layer description.
            layer.Description = 'Mean absolute error';
        end
        function loss = forwardLoss(layer, Y, T)
            % loss = forwardLoss(layer, Y, T) returns the MAE loss between
            % the predictions Y and the training targets T.
            % ##### Here I would like to filter Y and T before going further
            Y_filtered = ....
            T_filtered = ....
            % Calculate MAE.
            R = size(Y,3);
            meanAbsoluteError = sum(abs(Y-T),3)/R;
            % Take mean over mini-batch.
            N = size(Y,4);
            loss = sum(meanAbsoluteError)/N;
        end
    end
end

Answers (1)

UDAYA PEDDIRAJU on 14 Mar 2024
Hi Frederik,
First, ensure your data (Y and T) are dlarray objects, which are designed for deep learning operations and can be used on the GPU. If they are not already "dlarray" objects, you can convert them using the "dlarray" function.
For the high-pass filter, you can indeed use "dlconv"(https://www.mathworks.com/help/deeplearning/ref/dlarray.dlconv.html), but you need to carefully set up the filter weights and other parameters to mimic the high-pass filter operation "conv(Y, [1, -0.85])". The "dlconv" function is intended for convolutional operations in neural networks, so you would treat your filter coefficients as the weights of a convolutional filter.
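For example, you can sanity-check on a small test signal that "dlconv" reproduces "conv" before building it into the layer. This is a minimal sketch with made-up variable names (y, hp, W); it assumes a 1-D signal stored with the format 'SCB' (one spatial dimension, one channel, one observation). As far as I know, the deep learning convolution does not flip the kernel (it is effectively a cross-correlation), so the coefficients are reversed here to match "conv":
% Hypothetical test signal and high-pass coefficients.
y   = rand(32,1);
hp  = [1 -0.85];
ref = conv(y, hp, 'valid');            % ref(n) = y(n+1) - 0.85*y(n)
% Reverse the coefficients because dlconv does not flip the kernel.
W   = reshape(fliplr(hp), 2, 1, 1);    % filterSize-by-numChannels-by-numFilters
out = dlconv(dlarray(y,'SCB'), W, 0);  % zero bias, no padding
max(abs(extractdata(out) - ref))       % should be close to 0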
I have modified your code snippet below; you can insert the required parameters into it.
classdef maeRegressionLayer < nnet.layer.RegressionLayer ...
        & nnet.layer.Acceleratable
    % Example custom regression layer with mean-absolute-error loss.
    methods
        function layer = maeRegressionLayer(name)
            % Set layer name.
            layer.Name = name;
            % Set layer description.
            layer.Description = 'Mean absolute error with high-pass filtering';
        end
        function loss = forwardLoss(layer, Y, T)
            % dlconv accepts numeric arrays, gpuArray data, and dlarray
            % objects, so no explicit conversion is needed here; because
            % Y and T are not formatted dlarray objects, the dimension
            % layout is supplied via 'DataFormat'.
            % Assume Y and T are 4-D arrays with the format 'SSCB'
            % (spatial, spatial, channel, batch).
            % You might need to adjust this depending on your actual data format.
            C = size(Y,3); % number of channels (responses)
            % High-pass filter coefficients, applied along the second spatial
            % dimension. Note that dlconv does not flip the kernel; reverse
            % the coefficients if you need to match conv exactly.
            filterCoeffs = [1 -0.85];
            % One filter per channel (grouped convolution), so each response
            % channel is filtered independently.
            W = repmat(reshape(filterCoeffs,1,2,1,1),[1 1 1 1 C]);
            % Apply the high-pass filter to Y and T.
            % 'same' padding keeps the output the same size as the input.
            Y_filtered = dlconv(Y, W, 0, 'DataFormat','SSCB', 'Padding','same');
            T_filtered = dlconv(T, W, 0, 'DataFormat','SSCB', 'Padding','same');
            % Calculate MAE with filtered data.
            R = size(Y_filtered,3);
            meanAbsoluteError = sum(abs(Y_filtered - T_filtered),3) / R;
            % Take mean over mini-batch.
            N = size(Y_filtered,4);
            loss = sum(meanAbsoluteError) / N;
        end
    end
end
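Two remarks on the snippet above: the weights are replicated along a trailing group dimension so that "dlconv" performs a grouped (depthwise) convolution, i.e. each response channel is filtered independently instead of being summed across channels; and the filter runs along the second spatial dimension, so you will need to reshape the weights (and change 'DataFormat') if your signal lies along another dimension or uses a different layout such as 'CBT' for sequence data.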
This should help you resolve the issue you're facing.

Products


Version

R2023a
