Ask Me Anything about image analysis or the MathWorks community
Last activity: Reply by Nicolas Douillet on 5 August 2025
Hello, everyone! I’m Mark Hayworth, but you might know me better in the community as Image Analyst. I've been using MATLAB since 2006 (18 years). My background spans a rich career as a former senior scientist and inventor at The Procter & Gamble Company (HQ in Cincinnati). I hold both master’s & Ph.D. degrees in optical sciences from the College of Optical Sciences at the University of Arizona, specializing in imaging, image processing, and image analysis. I have 40+ years of military, academic, and industrial experience with image analysis programming and algorithm development. I have experience designing custom light booths and other imaging systems. I also work with color and monochrome imaging, video analysis, thermal, ultraviolet, hyperspectral, CT, MRI, radiography, profilometry, microscopy, NIR, and Raman spectroscopy, etc. on a huge variety of subjects.
I'm thrilled to participate in MATLAB Central's Ask Me Anything (AMA) session, a fantastic platform for knowledge sharing and community engagement. Following Adam Danz’s insightful AMA on staff contributors in the Answers forum, I’d like to discuss topics in the area of image analysis and processing. I invite you to ask me anything related to this field, whether you're seeking tool recommendations, tips and tricks, insight into my background, or career development advice. Additionally, I'm more than willing to share insights from my experiences in the MATLAB Answers community, File Exchange, and my role as a member of the Community Advisory Board. If you have questions about your specific images or your custom MATLAB code, though, I'd invite you to ask those in the Answers forum. It's a more appropriate venue for those kinds of questions, plus you get the benefit of other experts offering their solutions in addition to mine.
For the coming weeks, I'll be here to engage with your questions and help shed light on any topics you're curious about.
113 comments
I have appreciated your insights on many issues here, and I am having one now involving a relatively generic PC that I want to use as a platform for my physics teaching. The base question is here.
Any thoughts you have would be very much appreciated.
To precisely fit a 2.5-D perspective linear transformation (a 3x3 homography matrix) that will transform image pixel coordinates to mm locations on a planar target in the world (for precise camera calibration), I have weights (the inverse of the standard deviation of the location accuracy) that I want to use to minimize the weighted RMSE: sqrt(sum((Err[i]*Wt[i])^2) / sum(Wt[i]^2)), where Err[i] is the distance between a transformed input point and its corresponding [given] output point, and Wt[i] is the weight (i.e., 1/Sigma[i]). This is the standard way to compute a weighted sigma or RMS. (Unweighted, the RMSE is sqrt(sum(Err[i]^2)/N).) I need a version of estimateGeometricTransform2D() or estgeotform2d() that accepts weights. OpenCV's findHomography() works well except that it doesn't accept weights either. SVD from Numerical Recipes in C, cv::solve(), and NumPy all accept weights, but sometimes fail with homogeneous transforms. Hopefully, someone has already solved this important problem.
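For what it's worth, one common workaround is to fold the weights into the DLT (direct linear transform) system yourself: scale each correspondence's two equations by its weight before taking the SVD, so the SVD minimizes the weighted algebraic error. A minimal NumPy sketch of that idea (my own illustration, not a drop-in replacement for estgeotform2d, and it minimizes algebraic rather than geometric error):

```python
import numpy as np

def fit_homography_weighted(src, dst, w):
    """Weighted DLT: fit a 3x3 homography H mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points (N >= 4);
    w: (N,) per-point weights (e.g. 1/sigma). Each correspondence
    contributes two rows to the DLT system; scaling both rows by w[i]
    makes the SVD's least-squares solution weight that point more.
    """
    rows = []
    for (x, y), (u, v), wi in zip(src, dst, w):
        rows.append(wi * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(wi * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)          # null vector = smallest singular vector
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the projective scale
```

For refining the true weighted geometric RMSE (the formula above), this algebraic fit is typically used as the starting point for a nonlinear least-squares solver such as lsqnonlin in MATLAB.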
I am working on Jones matrix microscopy, but I am unable to generate an interferogram. Vertically polarized light from a He–Ne laser is converted into 45° linearly polarized light by a half-wave plate (HWP1) oriented at 22.5° with respect to the vertical direction, and is spatially filtered and collimated after passing through the spatial filter assembly and lens. The beam splitter splits the collimated beam with equal intensity into the two arms of the interferometer. The beam transmitted by the beam splitter passes through a triangular Sagnac interferometer embedded in a telescope assembly of lenses. The 45° polarized beam enters the polarization beam splitter and splits into two counter-propagating orthogonal polarization components in the triangular Sagnac geometry, and the light exits the beam splitter as angularly multiplexed, orthogonally polarized components. The mirrors introduce the desired amount of tilt into the emergent orthogonally polarized components Ox and Oy, which can be represented as Ox(r) = exp(iα1·r) and Oy(r) = exp(iα2·r), where α1 and α2 are the frequency coefficients introduced to the orthogonal polarization components due to their off-axis locations in the front focal plane of the lens, and r is the transverse spatial coordinate. I am using the same equations, but I don't know where I am going wrong, and my interference pattern is not properly visible. Please suggest how to proceed up to Ox and Oy, and also recommend good books and courses on this topic. Thank you in advance.
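As a sanity check on the Ox/Oy model above, the two tilted components can be simulated numerically. Note that orthogonally polarized beams do not produce fringes directly; they only interfere after projection onto a common analyzer (e.g. at 45°), which yields fringes at the carrier-frequency difference α1 − α2. A minimal 1-D NumPy sketch with assumed (made-up) carrier frequencies:

```python
import numpy as np

# 1-D sketch, not the exact setup: two off-axis plane waves
# Ox(r) = exp(i*a1*r), Oy(r) = exp(i*a2*r) on orthogonal polarizations.
r = np.linspace(-1e-3, 1e-3, 2000)        # transverse coordinate [m]
a1, a2 = 2 * np.pi * 8e3, 2 * np.pi * 3e3  # assumed spatial frequencies [rad/m]
Ox = np.exp(1j * a1 * r)                   # x-polarized component
Oy = np.exp(1j * a2 * r)                   # y-polarized component

# Project both onto a 45-degree analyzer, then detect intensity:
E45 = (Ox + Oy) / np.sqrt(2)
I = np.abs(E45) ** 2                       # = 1 + cos((a1 - a2) * r)
```

If the pattern is washed out in practice, common culprits are unequal arm intensities (reduced fringe contrast) or a carrier frequency too high for the camera's pixel pitch to sample.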
Hi Mark, and thank you for your effort. I hope this is the right place for my question; it's probably an easy one, but I couldn't figure out where my understanding goes wrong.
I was introducing myself to color spaces and tried to reproduce the CIE color space, playing around with tristimulus and RGB values. I wrote a script and first imported the CIE color space values from the standard. These values are X, Y and Z. I then normalize them by
x=X / (X+Y+Z)
y=Y / (X+Y+Z)
z=1-x-y
to get the chromaticity diagram. I then have three vectors x, y and z. As I understand it, these values are used to draw the spectral locus, the boundary of all perceivable colors. I can produce a plot with the outline and with the interpolated colors inside (using the patch command).
v = [x, y];                        % 2-D vertices of the spectral locus
f = 1:size(v, 1);                  % one face using every vertex in order
col = [x, y, z];                   % per-vertex color data (chromaticities)
patch('Faces', f, 'Vertices', v, 'FaceVertexCData', col, ...
      'FaceColor', 'interp', 'EdgeColor', 'none');
plot3(x, y, 1 - x - y, 'LineWidth', 1.5, 'Color', 'black')
But when I plot this, I get an odd, flat patch, and it doesn't look like the patches posted on Wikipedia.

So my question is: when you look at these values, you see that none of the pure RGB primaries sits at a corner. You never have a pure red (values 1 0 0); every corner is always a mixture of the three colors. But when I look at the Wikipedia entry, it looks like the pure colors are at the corners. Why is that?
A problem, of course, is that the sRGB color space lies inside this patch, so it is not possible for me to see every color correctly on screen. But does this mean that if I display a completely pure green, I actually see a mixture of three colors with my eyes?
Hopefully someone can help me. Thanks everyone for your time and effort!
Greetings
Hi there,
are there any must-read books on the fundamentals of image processing? For example, a book explaining the why and how behind Gaussian blur, denoising, etc., and the math behind these techniques? :-)
Hi. I tried to run this program, but it produces a lot of noise and does not detect people cleanly. What can I do to improve the program so that it removes the noise and detects people more clearly?

Hi, I want to remove the blurred white spots of varying size in the background of the image, and extract the outlines of the foreground bubbles. Can you please help me with this?
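One generic recipe worth sketching (assumed thresholds and synthetic data; the right choices depend entirely on the actual images): threshold away the faint, low-contrast background spots, then take the foreground mask minus its erosion as the bubble outline. A NumPy-only illustration:

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion via shifted logical ANDs (no toolboxes needed)."""
    m = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= m[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

# Synthetic stand-in image: bright background, one dark bubble,
# one faint blurred background spot that should be discarded.
img = np.full((64, 64), 200, dtype=float)
img[20:40, 20:40] = 30        # dark foreground "bubble"
img[5:10, 50:60] = 180        # faint background spot

fg = img < 100                # contrast threshold (assumed value)
outline = fg & ~erode(fg)     # boundary pixels only
```

In MATLAB the equivalent pipeline would use imbinarize, bwareaopen to drop small blobs, and bwperim for the outlines.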
How do I prepare a MATLAB model for an automated saw?
Hi! I'm creating a scene detector for scene classification using data obtained with the MATLAB Video Labeler app. The problem is that all the data from the Video Labeler is exported as timetables, which are ironically not compatible with scenelabeltrainingdata. Is there a way to train on the video with this data?