How can I map a specific location in MATLAB using drone images with corresponding GPS latitude and longitude data?
I have tabulated data in MATLAB of the images I took using a drone, with their latitude and longitude, and I want to make a map like the image below to visualize the location of each image. How can I achieve this in MATLAB? Does anybody know how to do it? Thank you for helping.

2 comments
KSSV
27 Feb 2022
What data do you have? You have (lat,lon) and images? Do you have the geographic reference of the images?
Cristel Esmerna
27 Feb 2022
Answers (2)
You can use geoplot: Plot line in geographic coordinates - MATLAB geoplot (mathworks.com)
You can ignore geolimits at first, get a nice view, query the limits, and then set them.
lats = [34.07293; 34.08];
longs = [-118.44867; -118.45];
id_str = {'1';'2'}; % maybe use the file name or a datetime string instead
data = table(id_str,lats,longs)
geoplot(data.lats,data.longs,'*m')
text(data.lats,data.longs,data.id_str,'Color','m','FontSize',12)
geolimits([34.0452 34.0978],[-118.4800 -118.4])
geobasemap streets

3 comments
William Rose
27 Feb 2022
Excellent answer from @AndresVar. In my answer above, I assumed you wanted to plot the area covered by each image.
Cristel Esmerna
27 Feb 2022
AndresVar
1 Mar 2022
@Cristel Esmerna parse and collect the data in the loop, but plot outside the loop; otherwise you might need to use hold on.
I don't know why it would show the wrong location. geoplot worked well with a location I got off Google Maps. Verify your lat and lon for just one photo on Google Maps.
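For example, collecting the coordinates in a loop and plotting once could look like the sketch below. This assumes the photos are geotagged JPEGs; the EXIF field names (GPSInfo, GPSLatitude, GPSLatitudeRef, ...) and the folder name are assumptions that depend on your drone's firmware and your files.

```matlab
% Sketch: collect GPS coordinates from geotagged JPEGs, then plot once.
% Field names inside GPSInfo may differ depending on the drone's firmware.
files = dir('dronephotos/*.JPG');   % hypothetical folder name
n = numel(files);
lats = zeros(n,1); longs = zeros(n,1);
for k = 1:n
    info = imfinfo(fullfile(files(k).folder, files(k).name));
    g = info.GPSInfo;
    % EXIF stores degrees, minutes, seconds; convert to decimal degrees
    lats(k)  = g.GPSLatitude(1)  + g.GPSLatitude(2)/60  + g.GPSLatitude(3)/3600;
    longs(k) = g.GPSLongitude(1) + g.GPSLongitude(2)/60 + g.GPSLongitude(3)/3600;
    if strcmp(g.GPSLatitudeRef,'S'),  lats(k)  = -lats(k);  end
    if strcmp(g.GPSLongitudeRef,'W'), longs(k) = -longs(k); end
end
geoplot(lats, longs, '*m')   % one call, outside the loop
geobasemap streets
```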
William Rose
27 Feb 2022
My interpretation of your question is that you would like to show shapes on a basemap that will indicate the area covered by each drone image. In general, each rectangular drone image will map to a curvy quadrilateral on the basemap. You know the lat and long of the corners of your basemap image, and therefore you can easily compute the lat and long of any point in the basemap by simple linear interpolation from the corner points.
Approach 1. For a start, let's assume the ground covered by the basemap is all at the same altitude. Then you need to know the lat and long of the drone, which you said you have. You also need to know its altitude above the ground and the 3D orientation of the camera (which is described by 3 parameters, or by the 3x3 rotation matrix, which only has 3 independent parameters, even though it has 9 elements). You need to know the camera projection properties, which is a non-trivial thing. If you have all that information, then you can compute the lat and long of the edge points of your image, and you can draw a shape on the basemap with those points.
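As a minimal sketch of Approach 1 in its simplest case, assume flat ground and a nadir-pointing (straight-down) camera; the altitude, field-of-view, heading, and position values below are made up for illustration, not real data.

```matlab
% Sketch of Approach 1: flat ground, camera pointing straight down.
% Altitude, fields of view, heading, and position are illustrative values.
lat0 = 34.07293; lon0 = -118.44867;  % drone position (decimal degrees)
h    = 100;           % altitude above ground, m (assumed known)
fovX = deg2rad(70);   % horizontal field of view (assumed known)
fovY = deg2rad(50);   % vertical field of view
hdg  = deg2rad(30);   % heading, clockwise from north
% Half-widths of the ground footprint, in meters
dx = h*tan(fovX/2); dy = h*tan(fovY/2);
corners = [dx dy; -dx dy; -dx -dy; dx -dy];   % local east,north offsets, m
R = [cos(hdg) sin(hdg); -sin(hdg) cos(hdg)];  % rotate footprint by heading
c = corners*R;
% Convert meters to degrees (good approximation for small areas)
mPerDegLat = 111320; mPerDegLon = 111320*cosd(lat0);
latC = lat0 + c(:,2)/mPerDegLat;
lonC = lon0 + c(:,1)/mPerDegLon;
geoplot([latC; latC(1)], [lonC; lonC(1)], '-b')  % closed footprint outline
```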
Approach 2. If you do not know the camera orientation and camera projection properties, then you need to identify several fiducial points, also known as reference points, on the base map, and associate them with corresponding points in the drone image. Then you use the associated points to solve for the constants in the projection equations that relate the drone image to the base map. Depending on the equations you choose for the projection, you may end up with an over-determined set of equations to solve. This will happen if you have M constants to solve for in the projection equations and more than M/2 fiducial points, because you get 2 identities from every fiducial point. If the system is overdetermined, then you take a least-squares approach to estimate the constants.
Example: Assume the camera projection is affine, which means that straight lines on the ground get mapped to straight lines in the camera image. Each point on the base map has a latitude ϕ and longitude λ. The points in the drone image are described by the pixel coordinates: row i, i=1:M, and column j, j=1:N. The affine projection equations can be written as
ϕ = a11*i + a12*j + a13
λ = a21*i + a22*j + a23
With 3 non-co-linear fiducial points, you can solve for the unknowns a11,...,a23. If you have more than 3 fiducial points, you use singular value decomposition to find the values of the six constants that minimize the errors between the actual and predicted basemap coordinates of the fiducial points. Once you have the six unknowns, you use the projection equations to determine the latitudes and longitudes of the corners of the image, and you draw straight lines between those points on the basemap.
7 comments
Beth Carreon
10 Mar 2022
Can you give an example code for this? I would like to try creating a map from an image by determining the coordinates of its corners. Thank you so much for your help.
William Rose
10 Mar 2022
Edited: William Rose
11 Mar 2022
[edited: I transposed the matrix representation of the model]
I will try. In my answer above, I meant to include "Approach 2." just before "If you do not know..." I will write some code for Approach 2. The goal of this not-yet-written code is to take a simulated drone image and determine the lat,long coordinates of the corners, using fiducial points in the drone image.
The mathematical version of the problem is as follows:
We assume the image is related to the real world by a linear transformation:
ϕ = a11*i + a12*j + a13
λ = a21*i + a22*j + a23
In vector-matrix form, this is
[ϕ, λ] = [i, j, 1] * A
We wish to use points in the image with known latitude and longitude (fiducial points) to estimate the elements of the 3x2 matrix A. Then we will use A to estimate the latitude and longitude of the image corners.
The simulated drone image is attached, in two forms: unmarked and marked. The marked version has 6 fiducial points. Notice that the fiducial points are all at about the same elevation. If the fiducial points are at significantly different elevations from each other, then this method may give less accurate results. More later.
William Rose
11 Mar 2022
The marked up image has six fiducial points. I used an image editing program to determine the row,column of each point, where 1,1 is the top left pixel. I used google maps in overhead view to estimate the lat,long of each fiducial point, by putting my cursor in the right spot and then left-clicking. Left-clicking produces a pop-up with the lat and long. I enter row, column, lat, long into a text file, to yield a text file with N rows and 4 columns (N=number of fiducial points). The latitudes and longitudes are in decimal format. For this example, the file of fiducial points is eastriver1coords.txt, attached.
The script reads in the file. The first two columns become array X (pixel coordinates). We add a trailing column of ones to X so that X has dimensions N-by-3. Columns 3,4 become array Y (geographic coordinates, N-by-2). We do singular value decomposition on X to get U,S,V, such that X=U*S*(Vtranspose), where S is diagonal. The best fit model is
A = V * inv(S) * U' * Y.
Model A is 3x2, as described above.
%imageMap.m W.Rose 20220310
%Use fiducial points to estimate latitude & longitude of image corners.
%Use file of fiducial point coordinates to estimate affine transform
%from the image to geographic coordinates.
%Compute and display the latitude and longitude of the image corners.
%Input: text file with 4 columns and N rows. Columns are
%pixel row, pixel column, latitude, longitude
%Additional input in the code: text file name, image size
%Do singular value decomposition of X to get U,S,V: X=U*S*V'
%Best fit model is A=V*inv(S)*U'*Y.
clear
infile='eastriver1coords.txt'; %name of 4-column text file
imsize=[1076,694]; %image size (rows,columns)
data=load(infile,'-ascii');
[N,~]=size(data); %N=number of fiducial points
X=[data(:,1:2),ones(N,1)];
Y=data(:,3:4);
[U,S,V]=svd(X,'econ');
A=V*inv(S)*U'*Y;
Xcorner=[1,1,1;1,imsize(2),1;imsize(1),1,1;imsize,1]; %corner pixels
Ycorner=Xcorner*A; %latitude, longitude of corners
fprintf('Latitude, longitude of corners:\n');
fprintf('Top left: %.6f, %.6f\n',Ycorner(1,:));
fprintf('Top right: %.6f, %.6f\n',Ycorner(2,:));
fprintf('Bottom left: %.6f, %.6f\n',Ycorner(3,:));
fprintf('Bottom right: %.6f, %.6f\n',Ycorner(4,:));
The estimated corner coordinates, when plotted in Google Maps, are pretty good but not perfect. See here. The imperfection is probably because the 3D-view images in Google (which I used for my simulated drone image) are not simple affine transformations.
Possible changes, which would make it more user-friendly, but would not change the numerical results:
- Have the user specify the image file, and read the image file to determine the image size, instead of having the image size hard-coded in the script.
- Create a separate script to open the un-marked-up image file. Allow the user to click on points in the image to define fiducial points. Save the image with a new name, marked up to show the location and number of each fiducial point. Create a 2-column text file with the row and column coordinates of each fiducial point. After running this new script, the user would edit the text file to add columns 3 and 4, the latitude and longitude of each fiducial point.
- Write a CSV file containing the corner coordinates in a format that can be easily imported into google maps.
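The last item could look something like the sketch below; the column names are one choice that map tools commonly accept, and the corner coordinates here are made-up example values standing in for Ycorner from the script above.

```matlab
% Sketch: save corner coordinates to a CSV for import into a mapping tool.
% Example lat,lon values stand in for Ycorner computed by the script above.
Ycorner = [40.7740 -73.9420; 40.7765 -73.9380; ...
           40.7700 -73.9390; 40.7725 -73.9350];
name = {'Top left';'Top right';'Bottom left';'Bottom right'};
T = table(name, Ycorner(:,1), Ycorner(:,2), ...
    'VariableNames', {'Name','Latitude','Longitude'});
writetable(T, 'corners.csv');   % one row per corner, with a header line
```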
William Rose
13 Mar 2022
@Beth Carreon, a projective transformation is more appropriate than an affine one for a drone image or a Google 3D-view image. That would be the next improvement to make in the code. A key difference between affine and projective is that in an affine transform, a city block near the camera and a city block far from the camera project to the same size, while a projective transformation allows the farther-away city block to be smaller in the image. It is analogous to orthographic and isometric drawings (affine) versus perspective drawing (projective) in engineering drawing.
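A sketch of that improvement, using fitgeotrans from the Image Processing Toolbox (a projective fit needs at least 4 fiducial points; the file and image size are taken from the example above):

```matlab
% Sketch: fit a projective (homography) transform from pixel coordinates
% to geographic coordinates, instead of the affine model. Needs >= 4 points.
% 'data' is the same N-by-4 file as above: pixel row, pixel col, lat, lon.
data = load('eastriver1coords.txt','-ascii');
movingPoints = data(:,[2 1]);   % fitgeotrans expects [x y] = [col row]
fixedPoints  = data(:,[4 3]);   % [lon lat] as [x y]
tform = fitgeotrans(movingPoints, fixedPoints, 'projective');
imsize = [1076,694];            % image size (rows, columns)
cornersXY = [1 1; imsize(2) 1; 1 imsize(1); imsize(2) imsize(1)];
lonlat = transformPointsForward(tform, cornersXY);  % [lon lat] per corner
fprintf('Corner lat, lon: %.6f, %.6f\n', [lonlat(:,2) lonlat(:,1)].');
```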
Beth Carreon
21 Mar 2022
@William Rose thank you for this comprehensive solution. Is there a way that I can get the same results using the GPS info of the image only, consisting of latitude - longitude - altitude, without determining the fiducial points? Thanks again for your help.
William Rose
21 Mar 2022
I am not sure I have correctly understood what you want to do.
I assume you would like to turn your drone image into a map-like image. There are nice ways to do it with imwarp(). imwarp() warps the image, and with the right inputs it can take a drone image and warp it into what the image would look like if it were taken from very high, straight overhead, i.e. warp it into a map projection. But imwarp() has to know a bunch of stuff to do that. The fiducial points are one way to calculate what imwarp() needs to know.
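As a sketch of that idea, again assuming the fiducial-point file from the example above (the image file name here is hypothetical):

```matlab
% Sketch: warp the drone image into geographic coordinates using a fitted
% projective transform. Assumes the fiducial file from the example above;
% 'eastriver1.jpg' is a hypothetical image file name.
data = load('eastriver1coords.txt','-ascii');
tform = fitgeotrans(data(:,[2 1]), data(:,[4 3]), 'projective');
img = imread('eastriver1.jpg');
[warped, R] = imwarp(img, tform);  % R (imref2d) holds the lon/lat limits
imshow(warped, R)   % axes are now longitude (x) and latitude (y)
```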
You have a lat and long and altitude. Is that for the drone when it took the photo, or for a place on the image?
Please post your question as a whole new question. It deserves its own thread. Include an image and the lat-long-alt info if you are able to share it, or, if you cannot share it, make up a simulated problem analogous to yours, perhaps using Google Maps and 3D view, like I did for you. There are people on this forum who know a lot more than me about this, who can give better help, although I will try. Include @William Rose in the posting, so that I get an email about it, because I have not been checking MATLAB Answers regularly for the last couple of weeks. Or we can take it offline if you prefer to share an email.
Beth Carreon
22 Mar 2022
@William Rose the GPS info is for the drone when it took the photo. I will post a question regarding this. Thank you for your help.