Ask Me Anything about image analysis or the MathWorks community

Image Analyst on June 18, 2024 (edited June 19, 2024)
Last activity: Reply by Nicolas Douillet on August 5, 2025

Hello, everyone! I’m Mark Hayworth, but you might know me better in the community as Image Analyst. I've been using MATLAB since 2006 (18 years). My background spans a rich career as a former senior scientist and inventor at The Procter & Gamble Company (HQ in Cincinnati). I hold both master’s & Ph.D. degrees in optical sciences from the College of Optical Sciences at the University of Arizona, specializing in imaging, image processing, and image analysis. I have 40+ years of military, academic, and industrial experience with image analysis programming and algorithm development. I have experience designing custom light booths and other imaging systems. I also work with color and monochrome imaging, video analysis, thermal, ultraviolet, hyperspectral, CT, MRI, radiography, profilometry, microscopy, NIR, and Raman spectroscopy, etc. on a huge variety of subjects.
I'm thrilled to participate in MATLAB Central's Ask Me Anything (AMA) session, a fantastic platform for knowledge sharing and community engagement. Following Adam Danz's insightful AMA on staff contributors in the Answers forum, I'd like to discuss topics in the area of image analysis and processing. I invite you to ask me anything related to this field, whether you're seeking recommendations on tools, tips and tricks, insight into my background, or career development advice. Additionally, I'm more than willing to share insights from my experiences in the MATLAB Answers community, File Exchange, and my role as a member of the Community Advisory Board. If you have questions about your specific images or your custom MATLAB code, though, I'd invite you to ask those in the Answers forum. It's a more appropriate place for those kinds of questions, plus you get the benefit of other experts offering their solutions in addition to mine.
For the coming weeks, I'll be here to engage with your questions and help shed light on any topics you're curious about.
Nicolas Douillet
Nicolas Douillet on August 5, 2025
This one is for you then, Mark :P
Frank
Frank on July 4, 2025
I have appreciated your insights on many issues here, and I am having one now with a relatively generic PC that I want to use as a platform for my physics teaching. The base question is here.
Any thoughts you have would be very much appreciated.
Scott
Scott on April 3, 2025 (edited April 3, 2025)
To precisely fit a 2.5-D perspective linear transformation (a 3x3 matrix) that transforms image pixel coordinates to mm locations on a planar target in the world (for precise camera calibration), I have weights (the inverse of the standard deviation in location accuracy) that I want to use to minimize the weighted RMSE: sqrt(sum((Err[i]*Wt[i])^2) / sum(Wt[i]^2)), where Err[i] is the distance between a transformed input point and its corresponding [given] output point, and Wt[i] is the weight (1/Sigma[i]). This is the standard way to compute a weighted sigma or RMS. (Unweighted, the RMSE is sqrt(sum(Err[i]^2)/N).) I need a version of estimateGeometricTransform2D() or estgeotform2d() that accepts weights. OpenCV's solveHomography() works well except it doesn't accept weights either. SVD from Numerical Recipes in C, cv::solve(), and NumPy all accept weights, but sometimes fail with homogeneous transforms. Hopefully, someone has already solved this important problem.
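For what it's worth, one way to prototype this in MATLAB is to weight the rows of the standard DLT (direct linear transform) system and solve it with svd. This minimizes a weighted algebraic error rather than the exact weighted geometric RMSE described above, so it's only a sketch under stated assumptions (the function name fitWeightedHomography and the inputs xy, uv, and w are made up for illustration); the result could be refined against the true weighted RMSE with lsqnonlin if needed.
% Rough sketch, not a drop-in replacement for estgeotform2d: weighted DLT homography fit.
% xy (N-by-2 pixel coords), uv (N-by-2 mm coords), w (N-by-1 weights, e.g. 1./sigma) are
% assumed inputs; Hartley-style coordinate normalization is omitted for brevity.
function H = fitWeightedHomography(xy, uv, w)
    N = size(xy, 1);                 % need at least 4 point pairs
    A = zeros(2*N, 9);
    for k = 1:N
        x = xy(k,1);  y = xy(k,2);  u = uv(k,1);  v = uv(k,2);
        A(2*k-1, :) = w(k) * [x y 1  0 0 0  -u*x -u*y -u];
        A(2*k,   :) = w(k) * [0 0 0  x y 1  -v*x -v*y -v];
    end
    [~, ~, V] = svd(A, 'econ');      % smallest-singular-value vector of the weighted system
    H = reshape(V(:, end), 3, 3)';   % 3x3 homography, defined only up to scale
    H = H / H(3,3);
end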
Neha
Neha on March 29, 2025 (edited March 29, 2025)
I am working on Jones matrix microscopy, but I am unable to generate an interferogram. Vertically polarized light from a He–Ne laser is converted into 45-degree linearly polarized light by a half-wave plate (HWP1) oriented at 22.5 degrees with respect to the vertical direction, and is spatially filtered and collimated after passing through the spatial filter assembly and lens. The beam splitter splits the collimated beam with equal intensity into the two arms of the interferometer. The beam transmitted by the beam splitter passes through a triangular Sagnac interferometer embedded in a telescope assembly of lenses. The 45-degree polarized beam enters the polarizing beam splitter and splits into two counter-propagating orthogonal polarization components in the triangular Sagnac geometry, and the light emerges from the beam splitter as angularly multiplexed, orthogonally polarized components. The mirrors introduce the desired amount of tilt in the emergent orthogonally polarized components Ox and Oy, which can be represented as Ox(r) = exp(i*α1*r) and Oy(r) = exp(i*α2*r), where α1 and α2 are the frequency coefficients introduced to the orthogonal polarization components due to their off-axis locations in the front focal plane of the lens, and r is the transverse spatial coordinate. I am using the same equations, but I don't know where I am going wrong, which is why my interference pattern is not clearly visible. Please suggest how to proceed up to Ox and Oy, and also suggest good books and courses related to this topic. Thank you in advance.
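Not an answer from the thread, but since the question is about getting visible fringes from Ox(r) = exp(i*α1*r) and Oy(r) = exp(i*α2*r): orthogonally polarized components do not interfere until they are projected onto a common polarization axis, for example by a 45-degree analyzer before the camera. A minimal MATLAB sketch of that step, with made-up tilt values alpha1 and alpha2, might look like this:
% Hedged sketch (not from the original post): carrier fringes from two tilted,
% orthogonally polarized plane waves after a 45-degree analyzer projects them
% onto a common axis. alpha1 and alpha2 are assumed tilt (carrier) frequencies.
[X, ~] = meshgrid(linspace(-1, 1, 512));   % transverse coordinate r (arbitrary units)
alpha1 =  2*pi*20;                         % carrier frequency of Ox (assumed)
alpha2 = -2*pi*20;                         % carrier frequency of Oy (assumed)
Ox = exp(1i * alpha1 * X);                 % x-polarized component
Oy = exp(1i * alpha2 * X);                 % y-polarized component
% The analyzer projects both components onto one axis so they can add coherently:
E = (Ox + Oy) / sqrt(2);
I = abs(E).^2;                             % recorded interferogram intensity
imshow(I, []);
title('Simulated carrier fringes after a 45-degree analyzer');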
Richard Z.
Richard Z. on March 28, 2025
Hi Mark, and thank you for your effort. I hope this is the right place for my question; it's probably an easy one, but I couldn't figure out what my understanding problem is.
So I was introducing myself to color spaces and tried to reproduce the CIE color space, playing around with the tristimulus and RGB values. I wrote a script and first imported the CIE RGB color space values from the standard. These values are X, Y, and Z. I then normalize them by
x=X / (X+Y+Z)
y=Y / (X+Y+Z)
z=1-x-y
to get the chromaticity diagram. I then have three vectors x, y, and z. As I understand it, these values are used to draw the spectral locus, the boundary of all perceivable colors. I can produce a plot with the outline and with the interpolated colors inside (using the patch command):
v = [x, y];
f = 1:1:size(v,1);
col = [x, y, z];
patch('Faces',f,'Vertices',v,'FaceVertexCData',col,'FaceColor','interp', 'EdgeColor', 'none');
plot3(x,y,1-x-y, 'LineWidth',1.5, 'color', 'black')
But when I plot this I get an odd, flat patch, and it doesn't look like the patches posted on Wikipedia.
So my question is: when you look at these values, you see that none of the basic RGB values are at the corners. You never have a pure red (values 1 0 0); every corner is always a mixture of the three colors. But if I look at the Wikipedia entry, it looks like the basic colors are at the corners. Why is that?
A problem, of course, is that the sRGB color space lies inside this patch, so it is not possible for me to see every color correctly on screen. But does this mean that if I display a completely pure green, I actually see a mixture of three colors with my eyes?
Hopefully someone can help me. Thanks everyone for your time and effort!
Greetings
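One guess at the difference from the Wikipedia figures (this is only a sketch, not code from the thread): those diagrams convert each (x, y) chromaticity back to XYZ at some fixed luminance and then to sRGB, for example with xyz2rgb, clipping anything outside the sRGB gamut, rather than feeding x, y, z directly into FaceVertexCData as if they were RGB values. Something along these lines, assuming the column vectors x and y from the script above:
% Hedged sketch: color the chromaticity diagram with sRGB approximations of each locus point.
Yl = ones(size(x));                        % fixed luminance for every point on the locus
XYZ = [x ./ y, Yl, (1 - x - y) ./ y];      % back from chromaticity (x, y) to XYZ at Y = 1
rgb = xyz2rgb(XYZ, 'ColorSpace', 'srgb');  % convert XYZ to sRGB (Image Processing Toolbox)
rgb = max(min(rgb, 1), 0);                 % clip out-of-gamut colors so they stay displayable
patch('Faces', 1:numel(x), 'Vertices', [x, y], 'FaceVertexCData', rgb, ...
      'FaceColor', 'interp', 'EdgeColor', 'none');
The clipping in the last step is also why an on-screen rendering can only approximate the colors outside the sRGB triangle.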
Manmeet
Manmeet on November 4, 2024
Hi there,
Are there any must-read books on the fundamentals of image processing? For example, a book explaining the why and how behind Gaussian blur, denoising, etc., and the math behind these techniques :-)
Image Analyst
Image Analyst on November 5, 2024
My favorite image processing book is The Image Processing Handbook by John Russ. It shows a wide variety of examples of algorithms from a wide variety of image sources and techniques. It's light on math so it's easy to read.
There is also a book by Steve Eddins, former leader of the image processing team at MathWorks, which has MATLAB code with it.
You might also want to look at the online book http://szeliski.org/Book/
瑠偉
瑠偉 on September 27, 2024
Hi. I tried to run this program, but it produces a lot of noise and does not detect people cleanly. What can I do to improve the program so it removes the noise and detects people more clearly?
Image Analyst
Image Analyst on November 6, 2024
Sorry, I don't really know and don't do much video tracking. I'd guess that you just use normal things like shape and size analysis to throw out blobs that aren't shaped and sized like real people.
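To make that concrete, here is a hedged sketch (not from the original program; the variable mask stands for whatever binary detection image the program produces, and the thresholds are assumed values to tune) of filtering a binary mask so that only blobs sized and shaped roughly like people survive:
% Rough sketch only: keep blobs that are large enough and taller than they are wide.
cc = bwconncomp(mask);
props = regionprops(cc, 'Area', 'BoundingBox');
keep = false(cc.NumObjects, 1);
minArea = 500;                              % discard small noise blobs (tune for your video)
for k = 1:cc.NumObjects
    bb = props(k).BoundingBox;              % [x, y, width, height]
    aspect = bb(4) / bb(3);                 % people are usually taller than they are wide
    keep(k) = props(k).Area >= minArea && aspect > 1.5 && aspect < 5;
end
cleanMask = ismember(labelmatrix(cc), find(keep));
imshow(cleanMask);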
Abhishek
Abhishek on September 23, 2024
Hi, I want to remove the blurred white spots of varying size in the background of the image and extract the outlines of the foreground bubbles. Can you please help me in this regard?
Image Analyst
Image Analyst on September 23, 2024
Not sure what you mean by "remove". You can replace the background by a defined gray level if you want, but you have to have something there.
You can also segment the image and make a mask of background and non-background pixels. Assuming the background is blurred and smooth, the local standard deviation will be low there. Therefore you should try stdfilt and then threshold the image. Something like
windowWidth = 9;  % odd size of the local window for the standard deviation filter; adjust as needed
sdImage = stdfilt(grayImage, ones(windowWidth));
mask = sdImage > 10;  % threshold on the local standard deviation; tune this value
imshow(mask);
If you need more help, ask again in the Answers forum.
Abhishek
Abhishek on September 24, 2024
Hello sir, thanks for your valuable reply. Here is the problem in a more elaborate way: I have an image with scattered white spots of various sizes here and there, as shown in the figure (and there are many more of these). I want to replace those white spots with a suitable pixel value, so that if I choose that pixel value as a threshold, the background as well as these white spots will not be there. The approaches I have tried so far are as follows:
  1. First, binarize the image with adaptthresh with a sensitivity factor of 1, then use this binarized image as a mask, then use regionfill to replace the white spots, and then replace the background pixels with 0 using a suitable threshold (a rough sketch of this approach appears after this list). It did not work, because the majority of the in-focus bubble surfaces have a pixel value of 255 in the gray image, which means they also get detected at sensitivity 1.
  2. First choose a suitable pixel value as a threshold and replace the background with pixel value 0, then remove the blurred white spots in the background with bwareaopen and a suitable area threshold. But this does not work because the sizes of the spots differ.
  3. Selecting those particular regions by hand and replacing their values will not work, because I have a set of images and this type of white spot appears in every one of them.
  4. I also tried to fix it with the lighting, but they appear in the images every time.
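For reference, a minimal sketch of approach 1 as described above might look like the following. grayImage and all threshold values are assumptions, and as noted it will also catch saturated in-focus bubble surfaces unless those are excluded, for example by blob size:
% Hedged sketch of approach 1; grayImage and the numeric thresholds are assumed values.
T = adaptthresh(grayImage, 1);                  % adaptive threshold with sensitivity 1
spotMask = imbinarize(grayImage, T);            % mask of bright regions (spots and bubbles)
spotMask = bwareaopen(spotMask, 50) & ~bwareaopen(spotMask, 5000);  % keep only small/medium blobs (assumed limits)
filled = regionfill(grayImage, spotMask);       % inpaint the masked white spots
filled(filled < 30) = 0;                        % flatten the background (assumed threshold)
imshow(filled, []);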
Abhishek
Abhishek on September 26, 2024
Hello Sir,
Here is the script I am using, along with the image and the segmented image. Could you help me visualize the bubble surfaces and edges more accurately, or tell me what changes I can make, and where, to visualize and identify the bubble surfaces more accurately?
Abhishek
Abhishek on September 30, 2024
Hello Sir, (@Image Analyst) can you help me with this code?
krn
krn on August 4, 2024
How do I prepare a MATLAB model for an automated saw?
Image Analyst
Image Analyst on August 4, 2024
@krn I don't think this is something that I can answer directly now since you didn't supply enough information. I suggest you ask in the Answers forum after reading this link: TUTORIAL: How to ask a question (on Answers) and get a fast answer
Be sure to explain whether you want a model in Simulink or MATLAB. Also include any data you may have. And say if you need any instrument control: are you sending commands to your saw, or are you getting some kind of signal back from your saw or from other sensors monitoring the saw's operation? Say whether you want some kind of model (formula) that fits or predicts a signal you're interested in, such as a machine learning model (Gaussian Process Regression, Support Vector Machine, decision trees, a polynomial fit, etc.).
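Just to illustrate the kind of thing I mean by a predictive model (a generic, hypothetical example, not something specific to your saw), a Gaussian Process Regression fit in MATLAB can be as short as this, where feedRate and bladeTemp are made-up stand-ins for whatever signal you measure and want to predict:
% Generic, hypothetical model fit (requires the Statistics and Machine Learning Toolbox).
feedRate  = (1:100)';                             % example predictor signal (made up)
bladeTemp = 40 + 0.3*feedRate + 2*randn(100, 1);  % example noisy response (made up)
mdl = fitrgp(feedRate, bladeTemp);                % fit a Gaussian Process Regression model
predictedTemp = predict(mdl, (101:110)');         % predict the response for new inputs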
Alejandro
Alejandro on July 29, 2024
Hi! I'm creating a scene detector using data obtained with the Video Labeler MATLAB app for scene classification. The problem is: all the data from the Video Labeler is exported as timetables, which are ironically not compatible with scenelabeltrainingdata. Is there a way to train on the video with this data?
Image Analyst
Image Analyst on August 1, 2024
Sorry, I haven't used that particular function and haven't had a need to, so I don't know; I know less about it than you do. I suggest posting in the Answers forum or calling tech support.