The ONNX model exported by exportONNXNetwork() gives different results in OpenCV than in MATLAB?

For example, I use the pretrained googlenet model to classify images. Following the official example, I test the exported model in OpenCV 4.1 on "peppers.png", but the predicted class is not bell pepper. No matter how I set the input image mean, normalization, and so on, it always fails.
My MATLAB program is:
net = googlenet;
exportONNXNetwork(net,'mygoogleNet.onnx','OpsetVersion',9); % or 6, 7, or 8
My OpenCV program is as follows ("synset_words.txt" is in the attachment):
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <fstream>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main()
{
    Mat img = imread("C:\\Program Files\\MATLAB\\R2019a\\examples\\deeplearning_shared\\peppers.png");
    String onnx_path = "mygoogleNet.onnx"; // ONNX file exported from the MATLAB googlenet
    std::string file = "synset_words.txt";
    vector<string> classes;
    std::ifstream ifs(file.c_str());
    if (!ifs.is_open())
        CV_Error(Error::StsError, "File " + file + " not found");
    std::string line;
    while (std::getline(ifs, line))
    {
        classes.push_back(line);
    }
    // Read the network
    Net net = readNetFromONNX(onnx_path);
    if (net.empty())
    {
        cout << "net is empty!" << endl;
    }
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);
    int net_size = 224; // googlenet input size
    img = img(Rect(0, 0, net_size, net_size)); // keep the same crop as in MATLAB
    while (true)
    {
        Mat image = img.clone();
        Mat blob;
        blobFromImage(image, blob, 1.0 / 255, Size(net_size, net_size), Scalar(122.6789, 116.6686, 104.0069), true); // set params
        //! [Set input blob]
        net.setInput(blob);
        Mat prob = net.forward();
        Point classIdPoint;
        double confidence;
        minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classIdPoint);
        int classId = classIdPoint.x;
        //! Show the result
        resize(image, image, Size(500, 500));
        // Put efficiency information.
        std::vector<double> layersTimes;
        double freq = getTickFrequency() / 1000;
        double t = net.getPerfProfile(layersTimes) / freq;
        std::string label = format("Inference time: %.2f ms", t);
        putText(image, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));
        // Print the predicted class.
        label = format("%s: %.4f", (classes.empty() ? format("Class #%d", classId).c_str() :
                                                      classes[classId].c_str()),
                       confidence);
        putText(image, label, Point(0, 40), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));
        imshow("", image);
        waitKey(1);
    }
    return 0;
}
Result:
Why is the result not correct? Does anyone know?

Answers (3)

Don Mathis on 29 May 2019
Edited: Don Mathis on 29 May 2019
Could it be that you're multiplying the test image by 1.0/255 before passing it to your imported network? Notice in the MATLAB example that the network was passed an image with pixels in the range [0, 255]; it looks like you're normalizing it to [0, 1].
Also, does OpenCV import images as BGR? If so, you'll need to convert the image to RGB, because the network expects that. Maybe both of these problems are occurring?
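Put together, the corrected preprocessing would look roughly like this (a minimal sketch in Python with cv2.dnn for brevity; the file names and mean values are copied from the question, and whether the mean should be subtracted at all depends on whether the exported model already includes MATLAB's input normalization):
import cv2

# Sketch only: keep pixels in [0, 255] (scalefactor 1.0, not 1/255) and let
# blobFromImage swap BGR -> RGB before feeding the network.
net = cv2.dnn.readNetFromONNX("mygoogleNet.onnx")
img = cv2.imread("peppers.png")[:224, :224]            # same top-left crop as the MATLAB code
blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224),
                             (122.6789, 116.6686, 104.0069),
                             swapRB=True, crop=False)
net.setInput(blob)
prob = net.forward()
print(int(prob.argmax()))                              # predicted class index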
  2 comments
David on 26 Apr 2021
re: image normalization
When executing an exported ONNX model in, say, Python, it is unclear to me whether we're supposed to leave the image in the raw 0-255 range or do some normalization. I have yet to get the same answer in MATLAB (where classifier accuracy is great) and in ONNX Runtime in Python, and I'm having a hard time finding the right combination of reshaping and image processing on the Python side. What I see on the web is people doing a per-channel mean subtraction, but the MATLAB code isn't doing any of that, apart from imresize.
Any examples would be greatly appreciated.
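For concreteness, this is the kind of thing I have been trying (a sketch only; the file name, and the assumption that the exported model expects NCHW float32 input with raw [0, 255] RGB values as in the MATLAB code, are mine):
import cv2
import numpy as np
import onnxruntime as ort

# Sketch: mimic the MATLAB preprocessing (resize only, raw 0-255, RGB),
# then run the exported model through ONNX Runtime.
img = cv2.cvtColor(cv2.imread("peppers.png"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224)).astype(np.float32)   # keep raw [0, 255], no /255

sess = ort.InferenceSession("mygoogleNet.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)                             # confirm the expected input layout first

x = np.transpose(img, (2, 0, 1))[np.newaxis, ...]      # HWC -> NCHW, add batch dimension
scores = sess.run(None, {inp.name: x})[0]
print(int(np.argmax(scores)))                          # predicted class index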



cui,xingxing on 30 May 2019
Edited: cui,xingxing on 30 May 2019
@Don Mathis, thank you for your reply, but it still fails to recognize the image. I understand what you said; no matter what combination of network inputs I try, blobFromImage() has been set to RGB order and the [0, 255] range.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
In addition, I tried to extract the features of a particular layer for "peppers.png". The results of the MATLAB and OpenCV extraction are also different. For example, extracting the "pool5-7x7_s1" layer features:
In MATLAB:
net = googlenet; % MATLAB pretrained deep network
net.Layers
% Read the image to classify
I = imread('peppers.png');
% Adjust the size of the image
sz = net.Layers(1).InputSize
I = I(1:sz(1),1:sz(2),1:sz(3));
% Classify the image using GoogLeNet
while true
    tic;
    [label,scores] = classify(net, I);
    feature = activations(net,I,"pool5-7x7_s1",'OutputAs','columns'); % MATLAB googlenet
    % feature = activations(net,I,"pool5|7x7_s1",'OutputAs','columns'); % bvlc_googlenet
    toc
end
View the value of the feature in the workspace:
In OpenCV, using the googlenet model exported with exportONNXNetwork():
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main()
{
    Mat img = imread("C:\\Program Files\\MATLAB\\R2019a\\examples\\deeplearning_shared\\peppers.png");
    String onnx_path = "mygoogleNet.onnx"; // ONNX file exported from the MATLAB googlenet
    // Read the network
    Net net = readNetFromONNX(onnx_path);
    if (net.empty())
    {
        cout << "net is empty!" << endl;
    }
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);
    int net_size = 224; // googlenet input size
    img = img(Rect(0, 0, net_size, net_size)); // keep the same crop as in MATLAB
    while (true)
    {
        Mat image = img.clone();
        Mat blob;
        blobFromImage(image, blob, 1.0, Size(net_size, net_size), Scalar(122.6789, 116.6686, 104.0069), true); // RGB order, [0,255]
        //! [Set input blob]
        net.setInput(blob);
        Mat features = net.forward("pool5_7x7_s1").reshape(1, 1024); // MATLAB googlenet
        //Mat features = net.forward("pool5/7x7_s1").reshape(1, 1024); // bvlc_googlenet
    }
    return 0;
}
View the value of the feature in "Image Watch":
As the comparison in the figure shows, with the same picture, the same network, the same layer, and the same input settings, the extracted features are very different. Why is that, and how can it be solved?
@Don Mathis, thank you for your prompt reply!
(my environment: Win10 + MATLAB R2019a + OpenCV 4.1 + the latest ONNX converter)
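One way to narrow this down (a sketch only, with the file names and mean values taken from the code above) would be to feed the identical blob to both OpenCV's dnn module and ONNX Runtime and compare the final outputs; a large difference would point at the OpenCV ONNX importer rather than at the preprocessing:
import cv2
import numpy as np
import onnxruntime as ort

# Build one blob and run it through both backends.
img = cv2.imread("peppers.png")[:224, :224]
blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224),
                             (122.6789, 116.6686, 104.0069),
                             swapRB=True, crop=False)

net = cv2.dnn.readNetFromONNX("mygoogleNet.onnx")
net.setInput(blob)
cv_out = net.forward().ravel()

sess = ort.InferenceSession("mygoogleNet.onnx")
ort_out = sess.run(None, {sess.get_inputs()[0].name: blob})[0].ravel()

print(np.abs(cv_out - ort_out).max())   # a large gap suggests an importer problem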
  6 comments
vv0u7 on 23 Jan 2020
I have the same problem. Did you find a solution?



KAAN AYKUT KABAKÇI on 6 Aug 2020
Hello,
In my environment the problem was entirely about the OpenCV version. With OpenCV 4.2.0 I was getting different results between MATLAB and Python; after downgrading OpenCV to 4.0.0, the problem disappeared. I am using the following blobFromImage configuration:
blob = cv2.dnn.blobFromImage(input_image, 1, (512,512), (0,0,0), True, False)
i.e. swapRB=True and crop=False, and the shape of my images is (512, 512, 3).
