Ego-motion compensation in the Grid-based Tracking in Urban Environment example
Hello support team,
I am currently working with the grid-based tracker (trackerGridRFS) and have been exploring the following example:
To understand in detail what happens inside the tracking algorithm, I was wondering where the ego-vehicle motion compensation takes place. As far as I understand, the grid has to be transformed before it can be updated with new measurements (e.g. translated and rotated by the vehicle's movement). I also searched the supporting classes such as "MeasurementEvidenceMap" but could not find it.
Thanks in advance for any hints and help!
Prashant Arora on 12 Nov 2020
The grid-based tracker (trackerGridRFS) estimates a local or ego-centric dynamic occupancy grid map, i.e., the dynamic occupancy grid map is always aligned with the current position and orientation of the ego vehicle. To estimate the dynamic grid map from sensor-level measurements, the tracker mainly needs two transforms. The first transform accounts for the position and orientation of the sensor with respect to the ego vehicle (and hence the grid). The second transform accounts for the position and orientation of the ego vehicle with respect to the world or scenario frame.
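To make the role of the two transforms concrete, here is a minimal numeric sketch of the underlying coordinate-frame math (in Python with NumPy rather than the MATLAB API; the poses and the measured point are made-up values for illustration). A sensor-frame point is first mapped into the ego/grid frame with the sensor-to-ego transform, and can then be mapped into the scenario frame with the ego-to-world transform:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform: rotation theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Hypothetical poses: sensor mounted 1.5 m ahead of the ego origin;
# ego vehicle at (10, 5) in the scenario frame, heading rotated 90 degrees.
T_sensor_to_ego = se2(1.5, 0.0, 0.0)
T_ego_to_world  = se2(10.0, 5.0, np.pi / 2)

# A point detected 2 m in front of the sensor, in the sensor frame
# (homogeneous coordinates).
p_sensor = np.array([2.0, 0.0, 1.0])

# First transform: sensor frame -> ego (grid) frame.
p_ego = T_sensor_to_ego @ p_sensor      # -> (3.5, 0) in the ego frame

# Second transform: ego frame -> world/scenario frame.
p_world = T_ego_to_world @ p_ego        # -> (10, 8.5) in the world frame
```

In the tracker, the first transform places the measurements on the local grid, while the second relates that grid to the global frame in which the particle states live.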
The tracker allows you to supply both of these transforms at each step via the sensor configurations input. See the SensorConfigurations and HasSensorConfigurationsInput properties of the tracker.
In the example, this input is calculated by the helper function helperGetLidarConfig provided with the example. The helper uses ground-truth information about the ego vehicle to compute this information. In real-world systems, the ego position and orientation are typically obtained from INS filters.
The tracker does the following for motion compensation:
- The tracker estimates the grid using a particle filter. The particle states are represented in the world coordinate frame, which allows state estimation in a global sense.
- The tracker projects the particles onto the local grid using the ego-to-scenario transformation.
- The tracker projects the sensor data onto the local grid using the sensor-to-ego transformation.
- Both the particle data and the sensor data are fused at the local-grid level.
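The key point is that the particles never need explicit shifting or rotation between steps: since they live in the world frame, projecting them into the ego-aligned grid at each step with the inverse of the current ego-to-world pose implicitly compensates for ego motion. A minimal numeric sketch of that projection (illustrative Python, with a made-up ego pose and particle positions, not the MATLAB implementation):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform: rotation theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Hypothetical ego pose in the scenario frame at the current step.
T_ego_to_world = se2(10.0, 5.0, np.pi / 2)

# Particle positions are kept in the world frame; here, two particles
# as homogeneous coordinate columns.
particles_world = np.array([[12.0, 5.0, 1.0],
                            [10.0, 8.5, 1.0]]).T

# Project the particles into the local (ego-aligned) grid by applying
# the inverse of the current ego-to-world transform. At the next step,
# only this transform changes; the world-frame particle states carry
# over unchanged, which is what compensates for the ego motion.
particles_local = np.linalg.inv(T_ego_to_world) @ particles_world
```

With the ego heading rotated 90 degrees, a particle 2 m "east" of the ego in the world frame lands 2 m to the ego's right in the local grid, as expected.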
Hope this helps.