Calculating the disparity map from a single camera
Hey all,
I have a grid of points being viewed by a single camera. The grid appears distorted when a curved lens is introduced into the camera's field of view. I want to track the movement of the points, and if a point moves more than a certain amount (say 10%), the distortion is too high and that lens is discarded. Originally, I performed blob analysis on a video that captured the introduction of the lens and the distortion it caused, which let me store the location of each point throughout the video and compare it to its original location. This worked okay, but it takes a lot of processing time and sometimes misses tracks.
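For reference, a simplified sketch of that per-frame comparison might look like the following (the blob parameters, filenames, frame index, grid pitch, and threshold are placeholders, not my actual values; knnsearch needs the Statistics and Machine Learning Toolbox):

% Blob analysis configured to return only centroids.
blobber = vision.BlobAnalysis('AreaOutputPort', false, ...
    'BoundingBoxOutputPort', false, 'MinimumBlobArea', 20);

v      = VideoReader('lens_test.avi');   % placeholder filename
k      = 100;                            % some frame after the lens is introduced (placeholder)
refPts = step(blobber, imbinarize(rgb2gray(read(v, 1))));  % original grid centroids
curPts = step(blobber, imbinarize(rgb2gray(read(v, k))));  % centroids in frame k

% Match each current centroid to its nearest original centroid.
[~, d] = knnsearch(double(refPts), double(curPts));

% Flag the lens if any point moved more than 10% of the nominal grid pitch.
gridPitch = 50;   % pixels between neighboring grid points (placeholder)
if any(d > 0.10 * gridPitch)
    disp('Distortion too high -- discard this lens')
end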
However, I was thinking it would be easier to just take two images and calculate the disparity/disparity map. I can only find information on disparity calculations between stationary images from two cameras separated by a baseline. Does anyone know how to use the disparity function and its parameters when you have two different images from a single stationary camera?
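To make that concrete, this is the kind of call I have in mind, although the documentation suggests disparity expects a rectified stereo pair, so I am not sure the output is meaningful when the shift comes from lens distortion rather than a horizontal camera baseline (the filenames and the DisparityRange value are placeholders):

ref = rgb2gray(imread('grid_no_lens.png'));    % reference image of the grid
cur = rgb2gray(imread('grid_with_lens.png'));  % same scene with the lens in place

% disparity() searches along horizontal scanlines of a rectified pair,
% so vertical point motion caused by the lens would not be captured.
dmap = disparity(ref, cur, 'DisparityRange', [0 64]);
imshow(dmap, [0 64]), colormap jet, colorbar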
Thank you
1 Comment
Swarooph
on 29 Jun 2016
I think this is a really interesting question, but fair warning: I don't have a full answer, or an answer specifically on how to use the disparity methodology with non-stereo images. Just an idea to think about:
I think it would be interesting to divide your algorithm into 'detection' and 'tracking'. Your blob analysis handles the 'detection' of the object of interest. To 'track' it further, you could consider one of the 'Object Tracking' algorithms shown here: http://www.mathworks.com/help/vision/object-tracking.html
Object tracking is 'supposed' to be a less expensive process than detection. So the idea is: detect the area of interest using blob analysis or another object detection algorithm in the first frame (or the first frame in which the object can be detected), then switch to a tracking algorithm to follow the object. When tracking is lost (because of occlusion, noise, etc.), run the detection algorithm again to reinitialize the tracker.
By doing this, we hope to rely less on the computationally expensive detection process, and more on the potentially efficient tracking process.
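A rough sketch of that detect/track loop (untested; detectGridPoints here is a hypothetical stand-in for whatever blob-based detection you already use, and the 50% re-detection condition is arbitrary):

% Detect grid points once, then track them with the KLT point tracker,
% re-detecting only when too many tracks are lost.
v       = VideoReader('lens_test.avi');   % placeholder filename
frame   = rgb2gray(read(v, 1));
pts     = detectGridPoints(frame);        % hypothetical: your blob analysis, Nx2 centroids
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, pts, frame);

for k = 2:v.NumberOfFrames
    frame = rgb2gray(read(v, k));
    [pts, valid] = step(tracker, frame);
    if nnz(valid) < 0.5*numel(valid)      % too many lost tracks: re-detect
        pts = detectGridPoints(frame);
        setPoints(tracker, pts);
    end
end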
Answers (0)