estimateGravityRotation

Syntax

[gRot,info] = estimateGravityRotation(poses,gyroscopeReadings,accelerometerReadings)
[gRot,info] = estimateGravityRotation(___,Name=Value)

Description

[gRot,info] = estimateGravityRotation(poses,gyroscopeReadings,accelerometerReadings) estimates the gravity rotation, using factor graph optimization, that helps transform the input poses to the local navigation reference frame of an inertial measurement unit (IMU). gyroscopeReadings and accelerometerReadings are the gyroscope and accelerometer readings of the IMU, respectively. The function also returns info, which contains validation metrics for the estimate, as well as velocity and bias estimates that you can use to update the factor graph. For more information about gravity rotation and validating the estimates, see Gravity Rotation and Pose Scale.

Note
Input poses must be in the initial IMU reference frame unless you specify the SensorTransform name-value argument, in which case the poses can be in a different frame. This function assumes that the input poses are in real-world scale. If your input poses are in an unknown scale, you can estimate their pose scale and gravity rotation by using the estimateGravityRotationAndPoseScale function.

[gRot,info] = estimateGravityRotation(___,Name=Value) specifies options using one or more name-value arguments, in addition to the input arguments from the previous syntax. For example, FixPoses=true specifies that the function treats the input poses as fixed parameters during the optimization process of estimation.
Examples
Specify input poses in the first camera pose reference frame.
poses = [0.1 0 0 0.7071 0 0 0.7071; ...
0.1 0.4755 -0.1545 0.7071 0 0 0.7071];
Specify 10 gyroscope and accelerometer readings between consecutive camera frames.
accelReadings = repmat([97.9887 -3.0315 -22.0285],10,1);
gyroReadings = zeros(10,3);
Specify IMU parameters.
params = factorIMUParameters(SampleRate=100, ...
                             ReferenceFrame="NED");
Specify a transformation consisting of a 3-D translation and rotation to transform input poses from the initial camera pose reference frame to the initial IMU pose reference frame. The initial sensor reference frame has the first sensor pose at its origin.
sensorTransform = se3(eul2rotm([-pi/2 0 0]),[0 0.1 0]);
Specify factor graph solver options.
opts = factorGraphSolverOptions(MaxIterations=50);
Estimate gravity rotation using IMU measurements between camera pose estimates.
[gRot,solutionInfo] = estimateGravityRotation(poses, ...
                                              {gyroReadings},{accelReadings}, ...
                                              IMUParameters=params, ...
                                              SensorTransform=sensorTransform, ...
                                              SolverOptions=opts)
gRot = 3×3
0.0058 -0.6775 -0.7355
0.1023 0.7320 -0.6736
0.9947 -0.0713 0.0735
solutionInfo = struct with fields:
Status: SUCCESS
TranslationErrors: [0.0088 0.0092 5.9933e-04]
RotationErrors: [0.0092 0.0151 0.0481]
VelocityInIMUReference: [2×3 double]
Bias: [2×6 double]
PoseToIMUTransform: [1×1 se3]
PosesInIMUReference: [2×7 double]
InitialCost: 2.3123e+03
FinalCost: 19.0067
FixedNodeIDs: [0 2 3 6]
IsSolutionUsable: 1
NumSuccessfulSteps: 23
NumUnsuccessfulSteps: 18
OptimizedNodeIDs: [1 4 5 7 8]
TerminationType: 0
TotalTime: 0.0029
Use the gravity rotation to transform the gravity vector from the local navigation reference frame to the initial camera pose reference frame.
% Gravity direction in NED frame is along the Z-axis.
gravityDirectionNED = [0; 0; 1];

% Gravity direction in the pose reference frame.
gravityDirection = gRot*gravityDirectionNED
gravityDirection = 3×1
-0.7355
-0.6736
0.0735
% Gravity vector in pose reference frame.
gravityMagnitude = 9.81;
gravity = gravityDirection*gravityMagnitude
gravity = 3×1
-7.2149
-6.6076
0.7214
Input Arguments
Camera or lidar poses, with the translation component of each pose in meters, estimated by stereo-visual-inertial or lidar-inertial systems, respectively, specified as one of these pose types:
- N-by-7 matrix, where each row is of the form [x y z qw qx qy qz]. Each row defines the xyz-position, in meters, and quaternion orientation, [qw qx qy qz].
- Array of se3 objects.
- Camera pose table returned by the poses (Computer Vision Toolbox) function of the imageviewset (Computer Vision Toolbox) object.
- Array of rigidtform3d (Image Processing Toolbox) objects.
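As a hedged illustration, this sketch expresses the two poses from the example above both as an N-by-7 matrix and as an array of se3 objects; quat2rotm is used here only to convert the quaternion part of each row into a rotation matrix.

% Two poses as an N-by-7 matrix of [x y z qw qx qy qz] rows.
posesMat = [0.1 0      0       0.7071 0 0 0.7071;
            0.1 0.4755 -0.1545 0.7071 0 0 0.7071];

% The same poses as an array of se3 objects (rotation built from the quaternion part).
posesSE3 = [se3(quat2rotm(posesMat(1,4:7)),posesMat(1,1:3)); ...
            se3(quat2rotm(posesMat(2,4:7)),posesMat(2,1:3))];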
Gyroscope readings between consecutive camera views or poses, specified as an (N-1)-element cell array of M-by-3 matrices, in radians per second. N is the total number of camera views or poses specified in poses. M is the number of gyroscope readings between consecutive camera views, and the three columns of gyroscopeReadings represent the [x y z] measurements between the camera views or poses. Note that there can be a different number of readings between consecutive camera views.
Data Types: cell
Accelerometer readings between consecutive camera views or poses, specified as an (N-1)-element cell array of M-by-3 matrices, in meters per second squared. N is the total number of camera views or poses specified in poses. M is the number of accelerometer readings between consecutive camera views, and the three columns of accelerometerReadings represent the [x y z] measurements between the camera views or poses. Note that there can be a different number of readings between consecutive camera views.
Data Types: cell
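Both gyroscopeReadings and accelerometerReadings use the same cell-array layout. This hedged sketch builds placeholder readings for N = 3 views with a different number of samples between each pair of consecutive views; the values are illustrative and assume a stationary IMU in an NED frame, not real sensor data.

% N = 3 camera views -> N-1 = 2 cells; the number of samples M can differ per cell.
gyroReadings  = {zeros(10,3); ...       % 10 gyroscope samples between views 1 and 2
                 zeros(12,3)};          % 12 gyroscope samples between views 2 and 3

% Placeholder accelerometer readings for a stationary IMU in the NED frame.
accelReadings = {repmat([0 0 -9.81],10,1); ...
                 repmat([0 0 -9.81],12,1)};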
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: estimateGravityRotation(poses,gyroscopeReadings,accelerometerReadings,IMUParameters=factorIMUParameters(SampleRate=100)) estimates the gravity rotation using IMU parameters that specify a sample rate of 100 Hz.
IMUParameters — IMU parameters
IMU parameters, specified as a factorIMUParameters object. Each IMU can have different characteristics, specified by its IMU parameters. The default parameter values in the factorIMUParameters object specify identity noise covariances, which are significantly larger than typical IMU covariances in practical scenarios. So, estimating and specifying the IMU parameters is important for accurate estimation of the gravity rotation. If the noise parameters of the IMU (AccelerometerNoise, AccelerometerBiasNoise, GyroscopeNoise, and GyroscopeBiasNoise) are unknown, use Allan variance analysis of IMU measurements to estimate them. For an example of how to use the Allan variance to determine noise parameters, see Inertial Sensor Noise Analysis Using Allan Variance.
Example: IMUParameters=factorIMUParameters(SampleRate=100)
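As a hedged sketch of specifying non-default IMU parameters, the following sets the four noise covariances explicitly; the numeric values are placeholders assumed for illustration, which you would normally obtain from an Allan variance analysis or the sensor datasheet.

% Placeholder noise values -- replace with values from Allan variance analysis
% or the sensor datasheet.
params = factorIMUParameters(SampleRate=100, ...
                             AccelerometerNoise=1e-4, ...
                             AccelerometerBiasNoise=1e-6, ...
                             GyroscopeNoise=1e-5, ...
                             GyroscopeBiasNoise=1e-7, ...
                             ReferenceFrame="NED");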
SolverOptions — Solver options
Solver options, specified as a factorGraphSolverOptions object.
Example: SolverOptions=factorGraphSolverOptions(MaxIterations=50)
SensorTransform — Transformation from input pose reference frame to initial IMU reference frame
Transformation from the input pose reference frame to the initial IMU sensor reference frame, specified as an se3 object.
The transformation between the input pose reference frame and the initial IMU frame consists of a 3-D translation and rotation. For example, if the input poses are camera poses in the initial camera sensor reference frame, then the sensor transform rotates and translates a pose or a point in the initial camera sensor reference frame to the initial IMU sensor reference frame. The origin of the initial sensor reference frame is the first pose of that sensor.
In some cases, the camera and IMU might be rigidly attached to a common base reference frame, and the poses are computed in the base reference frame. In this case, the sensor transform specifies the transformation from the base reference frame to the IMU reference frame. If the sensor transform is unknown, you can estimate it by using extrinsic calibration. For an example that shows the process of extrinsic calibration between a camera and an IMU to estimate an SE(3) homogeneous transformation, see Estimate Camera-to-IMU Transformation Using Extrinsic Calibration.
Example: SensorTransform=se3(eul2rotm([-pi/2 0 0]),[0 0.1 0])
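To make the direction of this mapping concrete, this hedged sketch applies the sensor transform from the example above to a point; the point value is arbitrary and only for illustration.

% Sensor transform from the example: initial camera frame -> initial IMU frame.
sensorTransform = se3(eul2rotm([-pi/2 0 0]),[0 0.1 0]);

% A point expressed in the initial camera reference frame (arbitrary value).
ptCamera = [0.2 0 0.5];

% The same point expressed in the initial IMU reference frame.
ptIMU = transform(sensorTransform,ptCamera);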
FixPoses — Fixed input poses during estimation
Since R2025a
Fixed input poses during estimation, specified as a logical true or false. When set to true, the function treats the input poses as fixed parameters during the optimization process of estimation. When set to false, the function treats the input poses as adjustable parameters that it can refine during optimization, but only within a small neighborhood of their specified values. Not fixing the poses gives the estimator more freedom and generally achieves better results. If you expect the input poses to be highly accurate, fixing them is computationally less expensive and can result in better accuracy.
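For example, reusing the variables from the example above (a hedged sketch, not an additional documented syntax), you might fix poses that come from a well-converged visual SLAM solution:

% Treat highly accurate input poses as fixed parameters during optimization.
[gRot,info] = estimateGravityRotation(poses,{gyroReadings},{accelReadings}, ...
    IMUParameters=params,SensorTransform=sensorTransform,FixPoses=true);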
InitialVelocity — Initial velocity guess for the IMU
Since R2025a
Initial velocity guess for the IMU, specified as a three-element row vector. By default, the initial velocity is unknown. Specifying the initial velocity helps with bias estimation. You can compute the initial velocity from GPS measurements.
Example: InitialVelocity=[0 0 0] specifies the initial velocity in all directions as 0.
Data Types: double
PredictionThreshold — IMU pose prediction threshold
Since R2025a
IMU pose prediction threshold, specified as a two-element row vector of the form [rotationErrorThreshold translationErrorThreshold]. If any rotation error in the RotationErrors field or any translation error in the TranslationErrors field of info exceeds rotationErrorThreshold or translationErrorThreshold, respectively, the Status field of the info structure indicates a failure due to a bad prediction.
The rotation error and translation error measure how well the IMU pose predictions align with the input poses after estimation. This argument defines the maximum acceptable rotation and translation errors, in radians and meters, respectively.
Rotation error is the difference in relative rotation between a pair of successive input poses, computed directly from the input poses and using IMU pose prediction.
Translation error is the difference in relative translation between a pair of successive input poses, computed directly from the input poses and using IMU pose prediction.
During estimation, the optimization minimizes these errors. Lower errors indicate better alignment between the IMU pose predictions and the input poses, and thus better estimates. If you expect the input poses to be accurate, you can specify smaller prediction thresholds, because they require the IMU pose predictions to align more closely with the input poses. Use larger thresholds when you expect the input poses to be less accurate.
Data Types: double
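As a hedged sketch, this call tightens the prediction thresholds and then inspects the Status field of info. The threshold values are placeholders you would tune for your data, and the comparison assumes the Status enumeration can be compared against its member name as text.

% Placeholder thresholds: 0.05 rad rotation error, 0.1 m translation error.
[gRot,info] = estimateGravityRotation(poses,{gyroReadings},{accelReadings}, ...
    IMUParameters=params,SensorTransform=sensorTransform, ...
    PredictionThreshold=[0.05 0.1]);

% Status is SUCCESS when all errors are within the thresholds.
if info.Status ~= "SUCCESS"
    disp("Estimate failed validation; inspect info.RotationErrors and info.TranslationErrors.")
end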
Output Arguments
Gravity rotation, returned as a 3-by-3 matrix, se3 object, or rigidtform3d (Image Processing Toolbox) object, similar to the input pose type. It contains the rotation required to transform the gravity vector from the local navigation reference frame of the IMU (NED or ENU) to the input pose reference frame.
Estimation diagnostics and optimization results, returned as a structure with these fields:

| Field | Description |
| --- | --- |
| Status | Status of the validation checks, returned as an enumeration that indicates whether the estimate is valid (SUCCESS) or the reason for failure. |
| TranslationErrors | Translation errors in the IMU pose prediction, returned as an (N-1)-by-3 matrix. Each row represents the difference in relative translation between a pair of successive input poses, computed directly from the input poses and using IMU pose prediction, in the form [x y z]. A lower translation error indicates that the relative translation predicted using IMU measurements is very close to the visual odometry estimate. |
| RotationErrors | Rotation errors in the IMU pose prediction, returned as an (N-1)-by-3 matrix. Each row represents the difference in relative rotation between a pair of successive input poses, computed directly from the input poses and using IMU pose prediction, in the form [x y z]. A lower rotation error indicates that the relative rotation predicted using IMU measurements is very close to the visual odometry estimate. |
| InitialCost | Initial cost of the nonlinear least-squares problem formulated by the factor graph before the optimization. |
| FinalCost | Final cost of the nonlinear least-squares problem formulated by the factor graph after the optimization. Note: Cost is the sum of error terms, known as residuals, where each residual is a function of a subset of factor measurements. |
| NumSuccessfulSteps | Number of iterations in which the solver decreases the cost. This value includes the initialization iteration at 0 in addition to the minimizer iterations. |
| NumUnsuccessfulSteps | Number of iterations in which the iteration is numerically invalid or the solver does not decrease the cost. |
| TotalTime | Total solver optimization time, in seconds. |
| TerminationType | Termination type of the optimization, returned as an integer in the range [0, 2]. |
| IsSolutionUsable | Whether the solution is usable, returned as 1 (true) if the solution is usable and 0 (false) otherwise. |
| PosesInIMUReference | Optimized poses transformed into the IMU navigation reference frame using the estimated gravity rotation, returned in the same format as the input poses. |
| VelocityInIMUReference | Velocity estimates, returned as an N-by-3 numeric matrix, representing velocities in the IMU navigation reference frame at the input pose sampling times. You can specify these values as initial guesses for newly created velocity nodes in the visual-inertial factor graph. |
| Bias | Bias estimates at the input pose sampling times, returned as an N-by-6 numeric matrix. You can specify these values as initial guesses for newly created bias nodes in the visual-inertial factor graph. |
| PoseToIMUTransform | Transformation from the input pose reference frame to the IMU navigation reference frame, returned as an se3 object. |
Data Types: struct
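The velocity and bias estimates are intended to seed a visual-inertial factor graph. As a hedged sketch, assuming an existing factorGraph object fg and hypothetical vectors of velocity and bias node IDs (velIDs and biasIDs are not produced by this function), you could set their initial states with the nodeState function:

% velIDs and biasIDs are hypothetical node ID vectors, one node per input pose.
nodeState(fg,velIDs,solutionInfo.VelocityInIMUReference);
nodeState(fg,biasIDs,solutionInfo.Bias);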
More About
To improve the accuracy of visual odometry pose estimation, visual-inertial odometry fuses IMU and visual measurements using a factor graph. For this fusion to work, both measurements must be in a common coordinate frame. Accelerometer measurements include constant gravity acceleration, which does not contribute to motion and must be removed for accurate integration with other sensor data.
To remove gravity acceleration, you must know the gravity direction. As such, IMU measurements commonly use the north-east-down (NED) and east-north-up (ENU) local navigation reference frames, as the gravity direction is known in these frames. However, the input pose reference frame might not align with the local navigation reference frame of the IMU. In these cases, you must transform input poses to the local navigation reference frame to remove the effects of gravity.
Gravity rotation transforms the gravity vector from the local navigation reference frame of the IMU to the input pose reference frame. As such, you can use the inverse of the gravity rotation estimated by the estimateGravityRotation function to help transform the input poses to the local navigation reference frame of the IMU.
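As a hedged illustration, reusing gRot and poses from the example above, the following rotates the pose positions and orientations into the NED navigation frame using the transpose (inverse) of the gravity rotation; note that the function already returns an equivalent result in the PosesInIMUReference field of info.

% Inverse of the gravity rotation (transpose of a rotation matrix).
gRotInv = gRot';

% Rotate the translation component of each [x y z qw qx qy qz] pose row.
positionsNav = (gRotInv*poses(:,1:3)')';

% Rotate the quaternion orientation of each pose by the same rotation.
qNav = quaternion(rotm2quat(gRotInv)) .* quaternion(poses(:,4:7));
posesNav = [positionsNav,compact(qNav)];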
The accuracy of the parameter estimated by estimateGravityRotation depends on the accuracy of the input pose estimates, the IMU noise parameters, and the sensor transform. estimateGravityRotation enables you to specify a prediction threshold as a name-value argument, and returns debugging information in the info structure to validate the estimate. However, passing this validation does not always guarantee a successful estimation. To increase confidence in the results, perform the estimation multiple times and evaluate the variance between the estimates. If the variance across these estimations is low, then you can trust the parameter estimates.
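As a hedged sketch of this consistency check, suppose you have split your data into several segments; posesSeg, gyroSeg, and accelSeg are hypothetical cell arrays with one entry per segment, params is a factorIMUParameters object, and an NED reference frame is assumed. You could compare the gravity directions estimated from each segment:

numSegments = numel(posesSeg);
gravityDirs = zeros(numSegments,3);
for k = 1:numSegments
    gRotK = estimateGravityRotation(posesSeg{k},gyroSeg{k},accelSeg{k}, ...
        IMUParameters=params);
    gravityDirs(k,:) = (gRotK*[0;0;1])';   % gravity direction in the pose frame (NED)
end

% A small spread across segments increases confidence in the estimate.
directionSpread = std(gravityDirs)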
If the estimated parameter fails validation, consider these strategies to improve accuracy:
- Modify the input arguments FixPoses, InitialVelocity, and SensorTransform. For the possible values of these input arguments and their effects, see their respective argument descriptions.
- Use non-default IMU parameters. The default parameters specify identity noise covariances, which are significantly larger than typical IMU covariances in practical scenarios.
- If trajectory data has been pre-acquired, select input poses from the trajectory such that the corresponding frames of the poses have nonzero acceleration between them along all axes.
- If trajectory data has not been acquired yet, collect camera data and IMU sensor data at the start of navigation, ensuring moderate angular velocity and acceleration along all possible axes.
References
[1] Campos, Carlos, Richard Elvira, Juan J. Gomez Rodriguez, Jose M. M. Montiel, and Juan D. Tardos. “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM.” IEEE Transactions on Robotics 37, no. 6 (December 2021): 1874–90. https://doi.org/10.1109/TRO.2021.3075644.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Version History
Introduced in R2023a

R2025a: You can now specify these name-value arguments to improve the estimate:
- FixPoses — Improve estimation accuracy by specifying whether the input poses are fixed or non-fixed during estimation.
- InitialVelocity — Compute the bias accurately by specifying the initial velocity of the IMU.
- PredictionThreshold — Filter unreliable estimates by specifying an IMU pose prediction threshold value.

The info output structure now includes these additional fields:
- Status — Indicates the validity, or reason for failure, of the estimation.
- TranslationErrors and RotationErrors — Indicate the difference in estimated translation and rotation, respectively, between the visual odometry estimate and the IMU measurement-based estimate.
- PoseToIMUTransform — Transformation from the input pose reference frame to the IMU navigation reference frame.
- PosesInIMUReference — Optimized poses in the IMU navigation reference frame.
- VelocityInIMUReference and Bias — Velocity estimates in the IMU navigation reference frame and bias estimates, respectively.