
USGS FLT HDR file interpretation

I have a legacy program that attempts to read USGS 1/3 arcsec data. When I compare results to GDAL, I find that the lat/lon-to-pixel conversion is off by one line. The reason, I think, is that the legacy code calculates the latitude of line 0 as

lat0 = yllcorner + (nrows - 1) * cellsize

where yllcorner, nrows and cellsize come from the .HDR file.

I see in the GDAL code in frmts/raw/ehdrdataset.cpp that there is no "subtract 1" term:

dfULYMap = dfYLLCorner + nRows * dfYDim;

I'm assuming that GDAL is right. Can anyone explain why, in terms a noob can understand?
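For what it's worth, here is a minimal sketch of the two formulas side by side. The header values are hypothetical (a plausible 1/3-arcsecond tile, not taken from any real .HDR file); the only facts assumed are the two formulas quoted above, plus the fact that GDAL's geotransform origin is the outer top-left corner of the top-left pixel rather than its center:

```python
# Hypothetical .HDR values for a 1/3-arcsecond tile (for illustration only)
yllcorner = 36.0
nrows = 10812
cellsize = 1.0 / 10800  # 1/3 arcsecond in degrees

# GDAL (frmts/raw/ehdrdataset.cpp): Y of the raster's top edge,
# treating yllcorner as the outer corner of the bottom-left cell.
uly_gdal = yllcorner + nrows * cellsize

# Legacy code: latitude assigned to line 0, which would be the
# *center* of the top row if yllcorner were itself a cell center.
lat0_legacy = yllcorner + (nrows - 1) * cellsize

# The two values differ by exactly one cell, which would account
# for a one-line offset in the lat/lon-to-pixel conversion.
assert abs((uly_gdal - lat0_legacy) - cellsize) < 1e-12
```

The sketch only demonstrates that the two formulas disagree by one cellsize; whether yllcorner in these files denotes a cell corner or a cell center is exactly the question being asked.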

Thanks for the help.

-reilly.

