image coordinates to robot coordinates

Ra'ad on 1 Nov 2020
Hello, community members.
Converting 2D image coordinates to 3D robot coordinates is an old topic with many threads, but there is one point of confusion I need to ask about.
I am working on a project to guide a 6-DOF ABB industrial robotic arm to a designated location using a Full HD webcam. Almost all the previous threads I have gone through cover calibration and transformation for smaller robots, where the measurements between image and robot base can be done with a small ruler.
The camera is calibrated using the checkerboard method, which is suggested to use 20 or more checkerboard images. After calibration it returns a 3x3 rotation matrix and a 1x3 translation vector for each of the 20 images.
Since the R and T matrices are what the actual process guide uses for the camera extrinsics, which rotation matrix and translation vector among these 20 should be used when computing the base transformation? Sorry for a childish question, but I would be thankful for a decent guide.
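One way to see what those 20 (R, t) pairs mean: each pair is the pose of the checkerboard in that one image, and projecting the board's corners through that pair (together with the shared intrinsic matrix K) reproduces that image's pixel coordinates. A minimal NumPy sketch of the pinhole projection, where K, R, and t are made-up placeholder values rather than a real calibration:

```python
import numpy as np

# Hypothetical intrinsic matrix (fx, fy, cx, cy are placeholder values)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Per-image extrinsics: the pose of the checkerboard in THIS image only.
R = np.eye(3)                     # example rotation (board parallel to sensor)
t = np.array([0.1, 0.05, 0.5])    # example translation, in metres

def project(X, K, R, t):
    """Project a 3-D board point X to pixel coordinates via K [R|t]."""
    cam = R @ X + t               # board frame -> camera frame
    uvw = K @ cam                 # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]       # perspective divide

corner = np.array([0.0, 0.0, 0.0])   # board origin corner
u, v = project(corner, K, R, t)
```

So none of the 20 pairs is "the" extrinsic for the robot application: each is only valid for the board position in its own image. Relating the camera to the robot base needs an extra, fixed transform (see below on hand-eye calibration).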
Can somebody share their experience, in detail, of how to transform or map 2D image coordinates (u, v) from a calibrated fisheye camera to the 3D robot coordinates (x, y, z) of the base or TCP of a 6-DOF industrial robot, so that the arm moves in real time to the exact position detected by the webcam?
I am not asking for a complete solution; I just need some useful hints to point me in the right direction. (The camera will be mounted on the robotic arm.)
Regards
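For the camera-on-arm ("eye-in-hand") setup described above, the usual missing piece is a fixed transform from the robot TCP to the camera, typically found once by hand-eye calibration. After that, a 3-D point measured in the camera frame is carried into the robot base frame by chaining homogeneous transforms. A hedged NumPy sketch of that chain; all matrix values here are illustrative placeholders, not calibrated data:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholders -- real values come from the robot controller
# (base->TCP forward kinematics) and from hand-eye calibration (TCP->camera).
T_base_tcp = make_T(np.eye(3), np.array([0.4, 0.0, 0.6]))
T_tcp_cam  = make_T(np.eye(3), np.array([0.0, 0.0, 0.05]))

# A 3-D point measured in the camera frame (homogeneous coordinates)
p_cam = np.array([0.02, -0.01, 0.30, 1.0])

# Chain the transforms: camera frame -> TCP frame -> robot base frame
p_base = T_base_tcp @ T_tcp_cam @ p_cam
```

Since the camera moves with the arm, T_base_tcp must be read from the controller at the instant the image is taken, while T_tcp_cam stays constant.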
  1 comment
venkey on 22 Dec 2024
Will you please help me with this problem? I am also trying the same project. Will you share the process and the MATLAB code, since you have already done this project?


Answers (1)

Pratyush Swain on 12 Jan 2025
Hi Ra'ad,
I understand you want to convert 2D image coordinates to 3D world coordinates.
Intrinsic parameters can be stored in a 'cameraIntrinsics' object (or estimated from checkerboard images with 'estimateCameraParameters'). Extrinsic parameters (a rotation matrix and a translation vector) can be estimated with the 'estimateExtrinsics' function. Once you have the intrinsic and extrinsic parameters, the 'triangulate' function can reconstruct world coordinates from points matched across two views:
worldPoints = triangulate(matchedPoints1, matchedPoints2, camMatrix1, camMatrix2);
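Note that triangulation needs matched points from two views. With a single camera, a common alternative (this is what MATLAB's 'pointsToWorld' / 'img2world2d' do) is to back-project the pixel ray and intersect it with a known plane such as the worktable. A NumPy sketch of that intersection, assuming the plane is Z = 0 in the world frame and using placeholder calibration values K, R, t:

```python
import numpy as np

# Placeholder calibration values (not from a real camera)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                   # world -> camera rotation
t = np.array([0.0, 0.0, 1.0])   # camera centre 1 m from the Z=0 plane

def pixel_to_plane(u, v, K, R, t):
    """Intersect the back-projected ray of pixel (u, v) with the world plane Z=0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_w = R.T @ ray_cam        # rotate the ray into the world frame
    cam_w = -R.T @ t             # camera centre in the world frame
    s = -cam_w[2] / ray_w[2]     # scale factor that makes Z = 0
    return cam_w + s * ray_w

world_pt = pixel_to_plane(640.0, 360.0, K, R, t)  # ray through the principal point
```

This only recovers a 3-D point if the target actually lies on the assumed plane; for arbitrary depths you need a second view or a depth sensor.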
Kindly refer to the following links to know more about the functions:
  1. cameraIntrinsics(): https://www.mathworks.com/help/vision/ref/cameraintrinsics.html
  2. triangulate(): https://www.mathworks.com/help/vision/ref/triangulate.html
Also it would be helpful to refer to this detailed example on estimating pose of moving camera mounted on a robot: https://www.mathworks.com/help/vision/ug/estimate-pose-of-moving-camera-mounted-on-a-robot.html
Hope this helps.
