Invalid Transform function defined on data store.

Hamza Afzal on 4 Feb 2021
Commented: 14 Apr 2024
doTrainingAndEval = true;
% Load the ground truth table
data = load('miovehicleDatasetGtruth.mat');
miovehicleDataset = data.data.miovehicleDataset;
% Split the dataset into training, validation, and test sets
rng(0)
shuffledIndices = randperm(height(miovehicleDataset));
idx = floor(0.6 * height(miovehicleDataset));
trainingIdx = 1:idx;
trainingDataTbl = miovehicleDataset(shuffledIndices(trainingIdx),:);
validationIdx = idx+1 : idx + 1 + floor(0.1 * length(shuffledIndices) );
validationDataTbl = miovehicleDataset(shuffledIndices(validationIdx),:);
testIdx = validationIdx(end)+1 : length(shuffledIndices);
testDataTbl = miovehicleDataset(shuffledIndices(testIdx),:);
% Use imageDatastore and boxLabelDatastore to create datastores for loading the image and label data during training and evaluation
imdsTrain = imageDatastore(trainingDataTbl{:,'imageFilename'});
bldsTrain = boxLabelDatastore(trainingDataTbl(:,2:end));
%%%% ALTERNATIVE CODE: optional replacement for the boxLabelDatastore call above
% carTbl=trainingDataTbl(:,'car')
% busTbl=trainingDataTbl(:,'bus')
% work_vanTbl=trainingDataTbl(:,'work_van')
% motorcycleTbl=trainingDataTbl(:,'motorcycle')
% bicycleTbl=trainingDataTbl(:,'bicycle')
% pedestrianTbl=trainingDataTbl(:,'pedestrian')
% pickup_truckTbl=trainingDataTbl(:,'pickup_truck')
% articulated_truckTbl=trainingDataTbl(:,'articulated_truck')
% singleunit_truckTbl=trainingDataTbl(:,'singleunit_truck')
% motorized_vehicleTbl=trainingDataTbl(:,'motorized_vehicle')
% nonmotorized_vehicleTbl=trainingDataTbl(:,'nonmotorized_vehicle')
% bldsTrain = boxLabelDatastore(carTbl,busTbl,work_vanTbl,motorcycleTbl,bicycleTbl,pedestrianTbl,pickup_truckTbl,articulated_truckTbl,singleunit_truckTbl,motorized_vehicleTbl,nonmotorized_vehicleTbl)
imdsValidation = imageDatastore(validationDataTbl{:,'imageFilename'});
bldsValidation = boxLabelDatastore(validationDataTbl(:,2:end));
imdsTest = imageDatastore(testDataTbl{:,'imageFilename'});
bldsTest = boxLabelDatastore(testDataTbl(:,2:end));
%%% Combine image and box label datastores
trainingData = combine(imdsTrain,bldsTrain);
validationData = combine(imdsValidation,bldsValidation);
testData = combine(imdsTest,bldsTest);
inputSize = [224 224 3];
% The error probably originates from the combined datastore created above
preprocessedTrainingData = transform(trainingData, @(data)preprocessData(data,inputSize));
numAnchors = 3;
anchorBoxes = estimateAnchorBoxes(preprocessedTrainingData,numAnchors)
featureExtractionNetwork = resnet50;
featureLayer = 'activation_40_relu';
numClasses = width(miovehicleDataset)-1;
lgraph = fasterRCNNLayers(inputSize,numClasses,anchorBoxes,featureExtractionNetwork,featureLayer);
augmentedTrainingData = transform(trainingData,@augmentData);
augmentedData = cell(4,1);
for k = 1:4
    data = read(augmentedTrainingData);
    augmentedData{k} = insertShape(data{1},'Rectangle',data{2});
    reset(augmentedTrainingData);
end
trainingData = transform(augmentedTrainingData,@(data)preprocessData(data,inputSize));
validationData = transform(validationData,@(data)preprocessData(data,inputSize));
options = trainingOptions('sgdm', ...
    'MaxEpochs',10, ...
    'MiniBatchSize',2, ...
    'InitialLearnRate',1e-3, ...
    'CheckpointPath',tempdir, ...
    'ValidationData',validationData);
pretrained = load('rcnnresnet50dectrvehicleexample.mat');
detector1 = pretrained.detector1;
[detector1, info] = trainFasterRCNNObjectDetector(trainingData,lgraph,options, ...
    'NegativeOverlapRange',[0 0.3], ...
    'PositiveOverlapRange',[0.6 1]);
faster_rcnn_detector_miovehicleDataset = detector1
save('faster_rcnn_detector_miovehicleDataset')
I = imread(testDataTbl.imageFilename{1});
I = imresize(I,inputSize(1:2));
[bboxes,scores] = detect(detector1,I);
I = insertObjectAnnotation(I,'rectangle',bboxes,scores);
figure
imshow(I)
function data = augmentData(data)
% Randomly flip images and bounding boxes horizontally.
tform = randomAffine2d('XReflection',true);
rout = affineOutputView(size(data{1}),tform);
data{1} = imwarp(data{1},tform,'OutputView',rout);
data{2} = bboxwarp(data{2},tform,rout);
end
function data = preprocessData(data,targetSize)
% Resize image and bounding boxes to targetSize.
scale = targetSize(1:2)./size(data{1},[1 2]);
data{1} = imresize(data{1},targetSize(1:2));
data{2} = bboxresize(data{2},scale);
end
Invalid transform function defined on datastore.
The cause of the error was:
Error using vision.internal.cnn.validation.checkTrainingBoxes (line 12)
Training data from a read of the input datastore contains invalid bounding boxes. Bounding boxes must be
non-empty, fully contained within their associated image and must have positive width and height. Use datastore
transform method and remove invalid bounding boxes.
Error in vision.internal.cnn.fastrcnn.validateImagesAndBoxesTransform (line 20)
boxes = vision.internal.cnn.validation.checkTrainingBoxes(images, boxes);
Error in
trainFasterRCNNObjectDetector>@(data)vision.internal.cnn.fastrcnn.validateImagesAndBoxesTransform(data,params.ColorPreprocessing)
(line 1667)
transformFcn =
@(data)vision.internal.cnn.fastrcnn.validateImagesAndBoxesTransform(data,params.ColorPreprocessing);
Error in matlab.io.datastore.TransformedDatastore/applyTransforms (line 473)
data = ds.Transforms{ii}(data);
Error in matlab.io.datastore.TransformedDatastore/read (line 162)
[data, info] = ds.applyTransforms(data, info);
Error in vision.internal.cnn.rcnnDatasetStatistics>readThroughAndGetInformation (line 72)
batch = read(datastore);
Error in vision.internal.cnn.rcnnDatasetStatistics (line 29)
out = readThroughAndGetInformation(datastore, params, layerGraph);
Error in trainFasterRCNNObjectDetector>iCollectImageInfo (line 1674)
imageInfo = vision.internal.cnn.rcnnDatasetStatistics(trainingData, rpnLayerGraph, imageInfoParams);
Error in trainFasterRCNNObjectDetector (line 427)
[imageInfo,trainingData,options] = iCollectImageInfo(trainingData, fastRCNN, iRPNParamsEndToEnd(params),
params, options);
Error in fasterrcnnnetworkdetectorcode (line 65)
[detector1, info] = trainFasterRCNNObjectDetector(trainingData,lgraph,options, ...
There is also a pretrained detector loaded in the code; since it is more than 200 MB, I cannot share it here. Please guide me on this error. I think the error is coming from empty bounding boxes: there are 11 classes in the 200-image vehicle dataset, and not every image contains all 11 classes, so the bounding boxes for the missing classes are [].
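A quick way to confirm this (a rough sketch, untested on this exact dataset; checkData and the loop variables are illustrative names) is to scan a combined datastore and report every read that contains empty or out-of-bounds boxes:
checkData = combine(imdsTrain,bldsTrain);   % fresh combined datastore, just for checking
k = 0;
while hasdata(checkData)
    k = k + 1;
    sample = read(checkData);               % {image, boxes, labels}
    boxes = sample{2};
    imsz = size(sample{1});
    if isempty(boxes)
        fprintf('Read %d: no bounding boxes\n', k);
    elseif any(boxes(:,3) <= 0 | boxes(:,4) <= 0)
        fprintf('Read %d: box with non-positive width or height\n', k);
    elseif any(boxes(:,1) < 1 | boxes(:,2) < 1 | ...
            boxes(:,1) + boxes(:,3) - 1 > imsz(2) | ...
            boxes(:,2) + boxes(:,4) - 1 > imsz(1))
        fprintf('Read %d: box extends outside the image\n', k);
    end
end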
2 comments
Zheng Yuan on 24 Apr 2021
Hello, I am facing the same problem now while training my YOLOv2 model.
Have you solved it yet?
Thank you
璐 on 9 Apr 2024
Me too!


Answers (1)

T.Nikhil kumar on 11 Apr 2024
Hello Hamza,
It appears that you are facing an invalid bounding box error while training a Faster R-CNN network on your custom dataset. I have looked at your training data table, and there are indeed empty bounding boxes for a few of the images.
When training an object detector such as Faster R-CNN, every image used for training must have valid associated bounding boxes, i.e. the box values must be finite, positive, non-fractional, non-NaN, and fully contained within the image boundary with a positive height and width.
Images whose bounding boxes are empty ([]) for every class will cause the training process to throw this error. To resolve it, all invalid bounding box instances in the dataset need to be removed or corrected to valid values.
You can manually filter out rows of your table that contain only empty bounding boxes before creating the 'boxLabelDatastore'. You can also use the code snippet below to filter the training data table:
% Filter out rows whose bounding boxes are empty for every class
validRows = false(height(miovehicleDataset), 1);
for i = 1:height(miovehicleDataset)
    isValid = false;
    for j = 2:width(miovehicleDataset) % Assuming first column is imageFilename
        if ~isempty(miovehicleDataset{i, j}{1})
            isValid = true;
            break;
        end
    end
    validRows(i) = isValid;
end
filteredDataset = miovehicleDataset(validRows, :);
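For example, the split and the datastores would then be rebuilt from filteredDataset instead of the original table (sketch only, reusing the variable names from the question; adjust the split logic to your needs):
shuffledIndices = randperm(height(filteredDataset));
idx = floor(0.6 * height(filteredDataset));
trainingDataTbl = filteredDataset(shuffledIndices(1:idx),:);
imdsTrain = imageDatastore(trainingDataTbl{:,'imageFilename'});
bldsTrain = boxLabelDatastore(trainingDataTbl(:,2:end));
trainingData = combine(imdsTrain,bldsTrain);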
You must also ensure that the preprocessing and augmentation functions do not make the bounding boxes invalid in later stages. For example, after augmentation (like flipping), bounding boxes should still be within the image boundaries and have a positive area.
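One way to guard against that in the augmentation step (a sketch modeled on how the MathWorks object detection examples handle warped boxes; augmentDataChecked is an illustrative name and should be tested on your own data) is to keep only the boxes that bboxwarp reports as surviving the warp, and to fall back to the original sample when every box would be removed:
function data = augmentDataChecked(data)
% Horizontal flip that also drops boxes invalidated by the warp.
dataOriginal = data;
tform = randomAffine2d('XReflection',true);
rout = affineOutputView(size(data{1}),tform);
data{1} = imwarp(data{1},tform,'OutputView',rout);
% The second output of bboxwarp lists the boxes that remain valid after
% the warp; keep only their labels as well.
[data{2},indices] = bboxwarp(data{2},tform,rout,'OverlapThreshold',0.25);
data{3} = data{3}(indices);
if isempty(indices)
    data = dataOriginal;   % never emit a sample with zero boxes
end
end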
Regards,
Nikhil
1 comment
影 on 14 Apr 2024
Hello, after testing, the same problem still occurs. On inspection, no invalid data was filtered out, so the data itself does not seem to be the problem.


Version: R2020a
