Color calibration, mean square error

I performed a radiometric calibration of the camera. How can I compare the color differences to the ground-truth values before and after the calibration by means of a mean square error? How can I do the mean square evaluation, and how do I interpret the grayscale fitting?

Accepted Answer

Image Analyst on 16 Feb 2019
Try this:
% Per-pixel CIE76 color difference (delta E) in Lab color space
deltaEImage = sqrt((lImage - refLImage).^2 + (aImage - refAImage).^2 + (bImage - refBImage).^2);
% Mean delta E over the whole image
meanColorDifference = mean2(deltaEImage)
where the l, a, and b images are the calibrated LAB color space images.
I have no idea what you mean by grayscale fitting. It's a color image, so to get calibrated values you must use calibrated standards and go to LAB color space. For computing color differences, you shouldn't be doing anything in RGB or grayscale color spaces. Those aren't calibration. Standardization, maybe, but not calibration.
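For anyone who wants to check the arithmetic outside MATLAB, here is a minimal Python/NumPy sketch of the same CIE76 computation (the function name is hypothetical; both images are assumed to already be in Lab):

```python
import numpy as np

def mean_delta_e76(lab_image, lab_reference):
    """Mean CIE76 color difference between two Lab images of equal shape.

    lab_image, lab_reference: arrays of shape (H, W, 3) holding L*, a*, b*.
    """
    diff = lab_image.astype(float) - lab_reference.astype(float)
    delta_e = np.sqrt(np.sum(diff ** 2, axis=-1))  # per-pixel delta E*ab
    return delta_e.mean()
```

Run it once on the uncorrected image and once on the calibrated one; a lower mean delta E after calibration is the improvement you are trying to show.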

7 comments

sara on 16 Feb 2019
Thank you for your answer. But we can't do that, because our target image is just the color checker. We somehow have to reduce the corrected image to contain just the color checker and compare these two.
Image Analyst on 17 Feb 2019
Of course you can do it because I do it all the time. Believe it or not, I actually teach courses on this topic all over the world. One next month, another one the month after that, .....
If you want to crop your image to the color checker chart, it depends on what your image looks like. If the background is not black, you can just threshold for black, then fill holes, get the bounding box, and call imcrop. But I see no need for "reducing" (cropping).
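The threshold-and-crop step above can be sketched in Python/NumPy (the function name and threshold value are hypothetical, to be tuned to your images; hole filling is skipped here because only the bounding box is needed):

```python
import numpy as np

def crop_to_chart(gray, black_threshold=30):
    """Crop a grayscale image to the bounding box of its non-black region."""
    mask = gray > black_threshold                 # foreground = not (near-)black
    rows = np.flatnonzero(mask.any(axis=1))       # row indices with foreground
    cols = np.flatnonzero(mask.any(axis=0))       # column indices with foreground
    if rows.size == 0:
        return gray                               # nothing above threshold
    return gray[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

In MATLAB the equivalent would be thresholding, regionprops with 'BoundingBox', and imcrop.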
If you want to do RGB standardization (matching an arbitrary RGB image to a reference RGB image), you need to measure the RGB values of the chips in both images and develop a transform, using least squares, to map the arbitrary one to what it would look like if it had been taken under the same conditions as the reference one.
RGB standardization is probably not really needed unless you want to compare two images on the same monitor.
What I always do is RGB calibration. This is calibration to a true colorimetric color space, Lab, which models how people perceive color. This will give you values like you'd get from a spectrophotometer - the gold standard for color measurement. Again, no need to crop. You just read the chip colors, develop the transform, then apply the transform to every pixel of the image to correct it.
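The "develop the transform, then apply it to every pixel" step can be sketched in Python/NumPy. This uses a simple 3x4 affine model fit by least squares as one possible choice (function names are hypothetical, and the attached tutorial may use a different, e.g. higher-order, model):

```python
import numpy as np

def fit_color_transform(measured, reference):
    """Least-squares 3x4 affine transform mapping measured chip colors
    (N x 3) to reference chip colors (N x 3)."""
    n = measured.shape[0]
    A = np.hstack([measured.astype(float), np.ones((n, 1))])   # N x 4 design matrix
    T, *_ = np.linalg.lstsq(A, reference.astype(float), rcond=None)  # 4 x 3
    return T

def apply_color_transform(image, T):
    """Apply the 3x4 transform to every pixel of an H x W x 3 image."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(float)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))])      # homogeneous coords
    return (flat @ T).reshape(h, w, 3)
```

With 24 chips and only 12 free parameters, the fit is overdetermined, which is what makes the least-squares step meaningful.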
See attached tutorial.
sara on 18 Feb 2019
Thank you. It was really helpful, especially the slides.
aguadopd on 13 Jun 2019
Dear @Image Analyst:
Thank you for the answers (this one and https://www.mathworks.com/matlabcentral/answers/58501-how-to-do-color-correction ) and the very informative slides. In them you suggest using LUTs for grayscale correction, but regression to estimate a matrix for color correction. Why not use a LUT for each of the RGB channels? I understand that we are exchanging the estimation of (at least) 9 parameters for 3 x (number_of_color_chips), say 3 x 24 = 72 parameters, but wouldn't it be more accurate?
Couldn't find a better place to ask you, sorry if this is not OK.
Image Analyst on 14 Jun 2019
You could use a 3-D lookup table. It would have 16.7 million elements (256 times 256 times 256), which used to be considered a lot of memory, but not any longer. Whether this is faster than doing the multiplication is something you'd have to check. You'd have to apply the 3-D LUT with a loop, since intlut() only works for grayscale images.
aguadopd on 14 Jun 2019
Edited: aguadopd on 14 Jun 2019
What a shame, I completely forgot about all the other values in the LUT! I will try it and see what happens and how long it takes. I will always have a checker in my images, so I guess that forcing all the observed colors to match the reference exactly won't be a mistake, as long as we have more or less decent images.
Again, thanks a lot for your contributions!
EDIT:
I could use
  1. A 3-D LUT with 256 x 256 x 256 elements
  2. Three 2-D LUTs with 256 elements each (256 x 3 values in total)
The second case treats each channel as independent, which may not be a safe assumption.
Image Analyst
I don't understand using 2-D LUTs. What are the different axes for?
Basically you'd need one 256x256x256 LUT for each output: R, G, and B. So the input R, G, B would be the 3 indexes, and the value of the LUT would be the new R value. Then another LUT for the new green values, and another one for the new blue values, so you'd need three 16.7-million-element 3-D LUTs to do a full conversion of an input RGB to a new/estimated output RGB.
For example, if your input R was 111, input G was 222, and input B was 123, to get the new R you'd do (adding 1 because MATLAB indexes from 1, while RGB values start at 0):
newR = redLUT(111+1, 222+1, 123+1);
Then to get the new G and B you'd use the LUTs you created for them:
newG = greenLUT(111+1, 222+1, 123+1);
newB = blueLUT(111+1, 222+1, 123+1);
Using three 1-D LUTs to convert red into a new red, etc., would be good only if the colors didn't change in any way other than a brightness shift or stretch -- no hue change, etc.
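To make the triple-LUT indexing concrete, here is a Python/NumPy sketch (names are hypothetical; NumPy indexes from 0, so no +1 offset is needed as it would be in MATLAB):

```python
import numpy as np

def apply_3d_luts(image, red_lut, green_lut, blue_lut):
    """Map an H x W x 3 uint8 image through three 256x256x256 LUTs.

    red_lut[r, g, b] gives the new red value for input color (r, g, b),
    and likewise for green_lut and blue_lut.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    # Fancy indexing applies each LUT to every pixel at once, no loop needed.
    return np.stack([red_lut[r, g, b],
                     green_lut[r, g, b],
                     blue_lut[r, g, b]], axis=-1)
```

Identity LUTs (red_lut[r, g, b] = r, etc.) return the image unchanged, which is a handy sanity check before loading real correction tables.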


More Answers (0)
