Inputs and targets have different numbers of samples

15 views (last 30 days)
Jason Tan Jia Sheng on 30 Dec 2023
clc
clear
% Load the new data
load('Data1.mat');

%% Wavelet Transform:
F = [];
signals = cell(1, 7);
for i = 1:7
    matrix = allData{i};   % Take one matrix at a time
    for j = 2:6            % Assuming you want to skip the first column
        column = matrix(:, j);
        [C, L] = wavedec(column, 4, 'haar');
        cA4 = appcoef(C, L, 'haar', 4);
        cD4 = detcoef(C, L, 4);
        cD3 = detcoef(C, L, 3);
        cD2 = detcoef(C, L, 2);
        cD1 = detcoef(C, L, 1);
        A4 = wrcoef('a', C, L, 'haar', 4);
        D1 = wrcoef('d', C, L, 'haar', 1);
        D2 = wrcoef('d', C, L, 'haar', 2);
        D3 = wrcoef('d', C, L, 'haar', 3);
        D4 = wrcoef('d', C, L, 'haar', 4);
        F = [F, A4, D4, D3, D2, D1];
        plot(matrix(:, 1), matrix(:, j))
    end
    signals{i} = F;
    F = [];
end

%% Feature Extraction
y = [];
signals_Feature = cell(1, 7);
for i = 1:7
    matrix = signals{i};
    for j = 1:5:size(matrix, 2)
        t = matrix(:, j:j+4);
        y = [y; FeatureExtraction(t)];
    end
    signals_Feature{i} = y';
    y = [];
end
%% Neural Network Input and Output
N_train = 25;
N_Val = 4;
N_test = 7;
[Train, Test, T_train, T_test] = Net_Data(N_train, N_Val, N_test, signals_Feature);
% Separate the train, validation, and test sets
Net_input_train = Train(:, 1:N_train)';
Net_input_val = Train(:, N_train+1:N_train+N_Val)';
Net_input_test = Test';
Net_out_train = T_train(:, 1:N_train)';
Net_out_val = T_train(:, N_train+1:N_train+N_Val)';
Net_out_test = T_test';
% Concatenate the sets for Neural Network training
Net_input = [Net_input_train; Net_input_val; Net_input_test];
Net_out = [Net_out_train; Net_out_val; Net_out_test];
disp(['Number of rows in Net_input: ', num2str(size(Net_input, 1))]);
disp(['Number of rows in Net_out: ', num2str(size(Net_out, 1))]);
% Make sure the number of samples matches
assert(size(Net_input, 1) == size(Net_out, 1), 'Mismatch in the number of samples between input and output data');
net_e_train = [];
net_e_test = [];
net = feedforwardnet([25]);
net.divideFcn = 'divideind';
[net.divideParam.trainInd, net.divideParam.valInd, net.divideParam.testInd] = ...
    divideind(2500, 1:7*N_train, 7*N_train+1:7*N_train+7*N_Val, 7*N_train+7*N_Val+1:7*N_train+7*N_Val+N_test);
net = train(net, Net_input, Net_out);
y = net(Net_input);
out = round(y);
net_e_train = [net_e_train, error(out, Net_out, N_train, N_Val)];
net_e_test = [net_e_test, error_t(out, Net_out, N_train, N_Val)];
numb_te_error = sum(sum(abs(out(:, 4:end) - Net_out(4:end, :))));
numb_tr_error = sum(sum(abs(out(:, 1:3) - Net_out(1:3, :))));
This is my code, and this is the error I face:

Error using network/train
Inputs and targets have different numbers of samples.
Error in classification3 (line 84)
net = train(net, Net_input, Net_out);

The data being read is a 1x7 cell array, where each cell contains a 2500x6 double.

Answers (2)

Sulaymon Eshkabilov on 30 Dec 2023
Without seeing your data, a couple of points:
(1) Why are you selecting the data sets for model training, validation, and testing in order? They should be selected in random order, e.g.:
% Set the data ratio to be used for training, validation, and testing
Train_D = 0.6;
Val_D = 0.2;
Test_D = 0.2;
% Random number generator seed for reproducibility (optional)
rng(42);
% Generate random index sets for the Train, Validation, and Test data partitions:
N_Data = size(X, 1); % X is the predictor matrix (one sample per row)
[Train_IDX, Val_IDX, Test_IDX] = dividerand(N_Data, Train_D, Val_D, Test_D);
% The data is then partitioned using the randomly generated indices
% Model TRAINING
X_train = X(Train_IDX, :); % Predictors
y_train = y(Train_IDX);    % Response
% Validation
X_val = X(Val_IDX, :);
y_val = y(Val_IDX);
% Testing
X_test = X(Test_IDX, :);
y_test = y(Test_IDX);
(2) The neural network training functions do not read cell arrays of this form. Therefore, the cell array data should be converted into a matrix or table first.
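A minimal sketch of such a conversion, assuming allData is the 1x7 cell array described in the question (one 2500x6 double per cell); the same idea applies to signals_Feature:
% Sketch only: stack the seven 2500x6 matrices into one numeric array
X = cell2mat(allData(:));   % 17500x6 double, one observation per row
% Optionally record which cell (e.g. which class/condition) each row came from
labels = repelem((1:numel(allData))', cellfun(@(m) size(m, 1), allData(:)));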

Hassaan on 30 Dec 2023
I have added some debugging checks and comments to help ensure that the inputs and targets have the same number of samples when training your neural network. Make sure to understand and adjust each part according to the specifics of your data and functions.
clc
clear
% Load the new data
load('Data1.mat');

%% Wavelet Transform:
F = [];
signals = cell(1, 7);
for i = 1:7
    matrix = allData{i};   % Take one matrix at a time
    for j = 2:6            % Assuming you want to skip the first column
        column = matrix(:, j);
        [C, L] = wavedec(column, 4, 'haar');
        A4 = wrcoef('a', C, L, 'haar', 4);
        D1 = wrcoef('d', C, L, 'haar', 1);
        D2 = wrcoef('d', C, L, 'haar', 2);
        D3 = wrcoef('d', C, L, 'haar', 3);
        D4 = wrcoef('d', C, L, 'haar', 4);
        F = [F, A4, D4, D3, D2, D1];
    end
    signals{i} = F;
    F = [];
end

%% Feature Extraction
y = [];
signals_Feature = cell(1, 7);
for i = 1:7
    matrix = signals{i};
    for j = 1:5:size(matrix, 2)
        t = matrix(:, j:j+4);
        y = [y; FeatureExtraction(t)];
    end
    signals_Feature{i} = y';
    y = [];
end
%% Neural Network Input and Output
% Presumably, Net_Data is a custom function you've defined to prepare data
N_train = 25;
N_Val = 4;
N_test = 7;
[Train, Test, T_train, T_test] = Net_Data(N_train, N_Val, N_test, signals_Feature);
% Separate the train, validation, and test sets
Net_input_train = Train(:, 1:N_train)';
Net_input_val = Train(:, N_train+1:N_train+N_Val)';
Net_input_test = Test';
Net_out_train = T_train(:, 1:N_train)';
Net_out_val = T_train(:, N_train+1:N_train+N_Val)';
Net_out_test = T_test';
% Concatenate the sets for Neural Network training
Net_input = [Net_input_train; Net_input_val; Net_input_test];
Net_out = [Net_out_train; Net_out_val; Net_out_test];
% Display sizes for debugging
disp(['Size of Net_input: ', num2str(size(Net_input))]);
disp(['Size of Net_out: ', num2str(size(Net_out))]);
% Ensure the number of samples matches
assert(size(Net_input, 1) == size(Net_out, 1), 'Mismatch in the number of samples between input and output data');
% Define and train the neural network
net_e_train = [];
net_e_test = [];
net = feedforwardnet([25]);
net.divideFcn = 'divideind';
[net.divideParam.trainInd, net.divideParam.valInd, net.divideParam.testInd] = ...
    divideind(36, 1:25, 26:29, 30:36); % Adjust these indices based on your actual data partitioning
% Train the network
net = train(net, Net_input, Net_out);
% Test the network
y = net(Net_input);
out = round(y);
% Calculate errors (Assuming error and error_t are your custom functions)
net_e_train = [net_e_train, error(out, Net_out, N_train, N_Val)];
net_e_test = [net_e_test, error_t(out, Net_out, N_train, N_Val)];
% Calculate and display the number of misclassified samples
numb_te_error = sum(sum(abs(out(:, 4:end) - Net_out(4:end, :))));
numb_tr_error = sum(sum(abs(out(:, 1:3) - Net_out(1:3, :))));
disp(['Number of training errors: ', num2str(numb_tr_error)]);
disp(['Number of test errors: ', num2str(numb_te_error)]);
  • Replace the FeatureExtraction, Net_Data, error, and error_t functions with the actual implementations or ensure they're included in your MATLAB path.
  • Adjust the divideind parameters to match the actual size of your data (see the sketch after this list).
  • Ensure that the Train, Test, T_train, and T_test variables generated by Net_Data are correctly structured and aligned.
  • Before running the full script, especially if computationally intensive, test it with a smaller subset of your data to ensure everything works as expected.
  • This script assumes that the allData variable from 'Data1.mat' is structured as expected. If the issue persists, inspect allData and the outputs at each stage for consistency and correctness.
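As an illustration of the second bullet above, here is a minimal sketch of deriving the divideind ranges from the block sizes instead of hard-coding 36. It assumes the samples end up as rows of Net_input after the concatenation; also note that train() counts samples along the columns of its input and target matrices, so Net_input and Net_out may additionally need to be transposed so that each sample occupies one column.
% Sketch only: compute the index ranges from the actual block sizes
n_train = size(Net_input_train, 1);   % rows contributed by the training block
n_val   = size(Net_input_val, 1);     % rows contributed by the validation block
n_test  = size(Net_input_test, 1);    % rows contributed by the test block
Q = n_train + n_val + n_test;         % total number of samples
[net.divideParam.trainInd, net.divideParam.valInd, net.divideParam.testInd] = ...
    divideind(Q, 1:n_train, n_train+1:n_train+n_val, n_train+n_val+1:Q);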
------------------------------------------------------------------------------------------------------------------------------------------------
If you find the solution helpful and it resolves your issue, it would be greatly appreciated if you could accept the answer. Leaving an upvote or a comment is also a wonderful way to provide feedback.
  1 comment
Jason Tan Jia Sheng on 30 Dec 2023
function f = FeatureExtraction(t)
    f = [sum(abs(t(:,1))) sum(abs(t(:,2))) sum(abs(t(:,3))) sum(abs(t(:,4))) sum(abs(t(:,5))) ...
         mean(t(:,1))];
end
This is the FeatureExtraction function.
function [Train, Val, Test, T_train, T_val, T_test] = Net_Data(N_train, N_Val, N_test, signals_Feature)
    % Extracting training data
    Train = [];
    for i = 1:min(N_train, numel(signals_Feature))
        Train = [Train signals_Feature{i}];
    end
    disp(['Size of Train: ', num2str(size(Train))]);
    % Extracting validation data
    Val = [];
    for i = N_train+1:min(N_train+N_Val, numel(signals_Feature))
        Val = [Val signals_Feature{i}];
    end
    disp(['Size of Val: ', num2str(size(Val))]);
    % Extracting testing data
    Test = [];
    for i = N_train+N_Val+1:min(N_train+N_Val+N_test, numel(signals_Feature))
        Test = [Test signals_Feature{i}];
    end
    disp(['Size of Test: ', num2str(size(Test))]);
    % Assuming you want to create a binary target vector
    T_train = zeros(1, size(Train, 2));
    T_val = ones(1, size(Val, 2)); % Assuming you have binary classification
    T_test = 2 * ones(1, size(Test, 2));
    % Displaying sizes of target vectors
    disp(['Size of T_train: ', num2str(size(T_train))]);
    disp(['Size of T_val: ', num2str(size(T_val))]);
    disp(['Size of T_test: ', num2str(size(T_test))]);
    % Create Net_out_train using the correct indexing
    Net_out_train = T_train(1:N_train);
    % Displaying the size of Net_out_train
    disp(['Size of Net_out_train: ', num2str(size(Net_out_train))]);
end
And this is the Net_Data function.
