
Qu Cao

MathWorks

Last seen: about a month ago | Active since 2016

Followers: 2   Following: 0

I'm an Automated Driving and Mapping Engineer at MathWorks and a Mechanical Engineer by education. DISCLAIMER: Any advice or opinions posted here are my own, and in no way reflect that of MathWorks.

Statistics

  • Knowledgeable Level 4
  • Knowledgeable Level 3
  • 3 Month Streak
  • Revival Level 3
  • First Answer


Feeds


Answered
In stereocalibration, is the relationship between the 'R and T output as PoseCamera2' and the actual camera position the same, or does the sign of x in T reverse?
Sorry for the confusion. We will update our documentation to be more specific about the meaning of PoseCamera2. PoseCamera2 is t...

6 months ago | 0

| Accepted

Answered
detectSIFTFeatures only working for uint8
Use im2double: I = imread('cameraman.tif'); points = detectSIFTFeatures(im2double(I))

8 months ago | 1

Answered
estworldpose giving different answers on each run.
estworldpose is a RANSAC-based method. You may want to set the random seed before running the function to get persistent results...

about a year ago | 0

| Accepted
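A minimal sketch of that advice. The input variables imagePoints, worldPoints, and intrinsics are hypothetical placeholders for your own data, not part of the original answer:

```matlab
% Fix the random number generator seed so that the RANSAC-based
% estworldpose returns the same pose on every run.
rng(0);  % any fixed seed works
worldPose = estworldpose(imagePoints, worldPoints, intrinsics);
```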

Answered
Replacing vision.GeometricTransformEstimator call
https://www.mathworks.com/matlabcentral/answers/521519-what-function-replaced-vision-geometrictransformestimator

more than a year ago | 0

Answered
I have two camera parameters from stereoParams. Which one should I choose for Stereo Visual SLAM application? Or do I just get their mean values?
Usually, the focal lengths of the two cameras are the same. You can use either one.

more than a year ago | 0

Answered
Defining Feature detection area.
You can specify the ROI that you want to extract features from.

more than a year ago | 0

| Accepted
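One way to sketch this: many of the detect*Features functions accept an 'ROI' name-value argument. The region below is a hypothetical example, shown here with detectHarrisFeatures on a built-in test image:

```matlab
% Detect features only inside a region of interest.
% The ROI is given as [x y width height] in pixels.
I = imread('cameraman.tif');      % built-in grayscale test image
roi = [50 50 100 100];            % hypothetical region
points = detectHarrisFeatures(I, 'ROI', roi);
```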

Answered
unit of translation result from estrelpose function
As the documentation of estrelpose says, the function calculates the camera location up to an unknown scale. This is because you...

almost 2 years ago | 0

| Accepted

Answered
Difficulties in obtaining good results with the ORB-SLAM2 algorithm in MATLAB.
Thank you for posting the question. In general, tuning the hyperparameters for a visual SLAM system can be hard and requires a...

almost 2 years ago | 1

| Accepted

Answered
vSLAM: vSLAM algorithm is very sensitive to hyperparameters Issue?
You have identified the nature of the SLAM problem. Yes, a visual SLAM system is sensitive to hyperparameters, which usually need to be tun...

almost 2 years ago | 0

| Accepted

Answered
How to construct stereoParameters with intrinsic and extrinsic matrix?
poseCamera2 essentially transforms camera 2 to camera 1. If you have “the translation and rotation from camera1 to camera 2”, (le...

about 2 years ago | 1

Answered
Object 3D world coordinates from multiple images
You will need a stereo camera to give you the actual dimensions of 3-D objects. Alternatively, if you know the size of an object...

about 2 years ago | 0

Answered
creating a bag of features for new image set for monocular SLAM
The bag-of-features data may not work for the KITTI dataset because it was trained using a small amount of image data. You may w...

about 2 years ago | 1

| Accepted

Answered
The Premultiply Convention in Geometric Transformations does not support C/C++ code generation?
Thank you for reporting this. There is a bug in the documentation. All the geometric transformation objects with the premultiply...

about 2 years ago | 0

| Accepted

Answered
How to use reconstructScene with a disparity map from file, without calling rectifyStereoImages ?
You can use the reprojectionMatrix output from rectifyStereoImages to do the reconstruction. Otherwise, you need to save the ste...

more than 2 years ago | 0

| Accepted
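A sketch of the first suggestion, assuming R2022a or later (where rectifyStereoImages returns the reprojection matrix) and that I1, I2, and stereoParams are your own images and calibration:

```matlab
% Rectify, compute disparity, then reconstruct with the
% reprojection matrix instead of the full stereoParams object.
[J1, J2, reprojectionMatrix] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparitySGM(im2gray(J1), im2gray(J2));
xyzPoints = reconstructScene(disparityMap, reprojectionMatrix);
```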

Answered
Match the coordinate systems of "triangulate" and "reconstructScene" with "disparitySGM"
The point cloud generated from reconstructScene is in the rectified camera 1 coordinate. Starting in R2022a, you can use the ad...

more than 2 years ago | 0

| Accepted

Answered
MATLAB Simulate 3D Camera: why is there no focal length (world units) attribute in the sensor model?
Please take a look at this page: https://www.mathworks.com/help/vision/ug/camera-calibration.html#bu0ni74 If you know the size...

more than 2 years ago | 0

Answered
How to port SLAM algorithm to embedded platform?
Unfortunately, as of R2022a the visual SLAM pipeline doesn't support code generation yet. We're actively working on this suppor...

more than 2 years ago | 1

| Accepted

Answered
how to get the relative camera pose to another camera pose?
Note that the geometric transformation convention used in the Computer Vision Toolbox (CVT) is different from the one used in th...

more than 2 years ago | 2

| Accepted

Answered
How to get 3D world coordinates from 2D image coordinates?
You should use the rectified stereo images. The disparityMap computed from disparitySGM should have the same size as your stereo...

more than 2 years ago | 0

Answered
Creating a depth map from the disparity map function
You can use reconstructScene for your workflow.

almost 3 years ago | 0

Answered
Unable to use functions from the Computer Vision Toolbox in Simulink MATLAB function block
A workaround is to declare the function as an extrinsic function so that it will be essentially executed in MATLAB: https://www...

almost 3 years ago | 0

| Accepted

Answered
how to get texture extraction using LBP features in MATLAB?
You can use the extractLBPFeatures function.

about 3 years ago | 0
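A minimal sketch of that answer, using a built-in test image:

```matlab
% Extract a local binary pattern (LBP) feature vector that
% summarizes the texture of a grayscale image.
I = imread('cameraman.tif');
lbpFeatures = extractLBPFeatures(I);
```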

Answered
About error of helperVisualizeMotionAndStructureStereo
In helperVisualizeMotionAndStructureStereo.m, please note the following code in retrievePlottedData which discards xyzPoints out...

about 3 years ago | 2

Answered
About SLAM initial Pose data
The initial pose data is provided by the dataset. It's used to convert the 3-D reconstruction into the world coordinate system. ...

about 3 years ago | 0

Answered
About "slam" on my camera device
The example shows how to run stereo visual SLAM using recorded data. It doesn't support "online" visual SLAM yet, meaning that y...

about 3 years ago | 0

Answered
Is Unreal Engine of the Automated Driving Toolbox available on Ubuntu?
As of R2021a, only Windows is supported. See Unreal Engine Simulation Environment Requirements and Limitations.

more than 3 years ago | 1

Answered
why we use Unreal engine when there is a 3D visualization available in Automated driving toolbox?
It's not just used for visualization. With Unreal, you can configure prebuilt scenes, place and move vehicles within the scene, ...

more than 3 years ago | 0

| Accepted

Answered
About running a stereo camera calibrator
In general, you can use any type of stereo camera and calibrate its intrinsic parameters using the Stereo Camera Calibrator. You...

more than 3 years ago | 0

Answered
How to obtain optimal path between start and goal pose using pathPlannerRRT() and plan()?
Please set the random seed at the beginning to get consistent results across different runs: https://www.mathworks.com/help/mat...

more than 3 years ago | 0

| Accepted

Answered
Does vehicleCostmap this type of map only support pathPlannerRRT object to plan a path? Can I use another algorithm to plan a path?
You can create an occupancyMap object from a vehicleCostmap object using the following syntax: map = occupancyMap(p,resolution)...

more than 3 years ago | 0
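A sketch of that conversion, assuming Navigation Toolbox is available for occupancyMap; the zeros matrix is a hypothetical stand-in for a real costmap:

```matlab
% Convert a vehicleCostmap into an occupancyMap so path planners
% outside Automated Driving Toolbox can consume it.
costmap = vehicleCostmap(zeros(50));         % placeholder costmap
p = getCosts(costmap);                       % per-cell costs in [0, 1]
map = occupancyMap(p, 1/costmap.CellSize);   % resolution in cells/meter
```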
