Nov 2008

Lucky enough to have (access to) a cluster? Here are some notes on how to do geospatial number crunching on it, a.k.a. HPC (High Performance Computing).

Preparing the disks
We decided to use the ext3 file system. An initial problem was formatting the RAID5 disk set, since its size exceeded what ext3 supports with the default settings. Setting the ext3 block size to 4k instead of 1k allowed us to format it.
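For the record, a minimal sketch of such a format command (the device name is just a placeholder for your RAID5 volume):
# format the RAID volume with ext3 using a 4k block size (device name is an example)
mkfs.ext3 -b 4096 /dev/sdb1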

Storage: a home for GIS data
The disks are available via NFS to all nodes (blades in our case). All raw/original data sets and the GRASS database sit in an NFS-exported directory, which I even mount on my laptop to easily add/access/modify things.
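As a rough sketch (path and subnet are placeholders, tune the options to your setup), the export on the front end could look like this in /etc/exports:
# /etc/exports on the NFS server (front end); path and subnet are examples
/shared/grassdata  192.168.1.0/24(rw,async,no_subtree_check)
# reload the export table after editing
exportfs -ra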

Front-end machine and blades configuration
The cluster is (currently) a 56-CPU blade system; we'll expand to 128 CPUs later this year (16 blades with two quad-core processors and 16 GB RAM per blade). Additionally, we have a front-end machine to run the job manager and to link in further disks, all via NFS.
The blades are configured diskless, i.e., once started they receive their operating system from the front-end machine over the network (10 Gb/s Ethernet). This way we have a single directory on the front end which contains all the software; it is then propagated to all blades. Very convenient. We use Scientific Linux (the LiveDVD copied onto the disk; there is a special directory to store your modifications, which are merged in on the fly when the blades boot, a pretty cool concept). The job scheduler is (Sun) Grid Engine, also free software. Job control with GRASS is described here:

https://grass.osgeo.org/wiki/Parallel_GRASS_jobs
-> Grid Engine
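For illustration only, a stripped-down sketch of submitting GRASS batch jobs as a Grid Engine array job (the script paths, the grass64 start command and the per-task mapset naming are assumptions; see the wiki page above for working recipes):
# process_one_map.sh is a hypothetical GRASS script that does the actual work
cat > modis_job.sh << 'EOF'
#!/bin/sh
#$ -N modis_job     # job name
#$ -cwd             # run from the submission directory
#$ -V               # export environment variables to the job
# GRASS_BATCH_JOB makes GRASS run the script non-interactively and exit
export GRASS_BATCH_JOB=/shared/scripts/process_one_map.sh
# one pre-created mapset per task, selected via the array task number
grass64 -text /shared/grassdata/myloc/modis_$SGE_TASK_ID
EOF
qsub -t 1-8000 modis_job.sh   # array job: one task per MODIS map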

GRASS: Avoiding replicated import of large data files through virtual linking
New in GRASS 6.4 is that you can register a geodata file on the fly with r.external. Altogether I have 1.4 TB of new GIS data from our province, and naturally I didn't want to buy a new disk array just for my provincial GRASS location! Here r.external comes in handy, reducing the "import" to a few bytes. As expected, it leverages GDAL to get the data into GRASS, and the overhead is minimal.
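A minimal sketch (file and map names are examples): inside the provincial location, link a large GeoTIFF instead of importing it:
# register the external file; only a small pointer is stored in the mapset
r.external input=/shared/rawdata/dtm_province_1m.tif output=dtm_province_1m
# the linked map can then be used like any other raster
r.info dtm_province_1m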

Power consumption
Power consumption is measured, too: the entire system draws around 2000 W (each blade less than 200 W), so it is going in the direction of "green" computing (there is no such thing!). If only we had a solar panel…

Outcome
All in all, a very nice solution. As a stress test I removed all internal switches and shut down the blades while processing 8000 MODIS satellite maps. Everything survived, and the Grid Engine job manager collected the crashed jobs and restarted them without complaining. All resulting maps are collected in the target GRASS mapset and can even be exported to common GIS formats, if needed.
If you want to run Web Processing Services (e.g., pyWPS), you can likewise send each session to a node, giving you enormous possibilities for your customers.

Edit 2014:

“Big data” challenges on a cluster – limits and our solutions to overcome them:

  • 2008: internal 10Gb network connection way too slow
    • Solution: TCP jumbo frames enabled (MTU > 8000) to speed up the internal NFS transfer (see the sketch after this list)
  • 2009: hit an ext3 filesystem limitation (at most 32k subdirectories per directory, but cell_misc/ holds one subdirectory per raster map and each GRASS GIS raster map consists of multiple files)
    • Solution: adopted the XFS filesystem [yikes! … everything had to be reformatted, i.e. some terabytes had to be "parked" temporarily]
  • 2012: free inodes on XFS exhausted
    • Solution: updated the XFS version [err, reformat everything again]
  • 2013: I/O saturation in NFS connection between chassis and blades
    • Solution: reduction to one job per blade (queue management), 21 blades * 2.5 billion input pixels + 415 million output pixels
  • 2014: GlusterFS saturation
    • Solution: bought and deployed a new 48-port switch to implement 8-channel trunking (= 8 Gb/s)
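The jumbo-frame change mentioned for 2008 boils down to raising the MTU on the NFS interface; a sketch (interface name and MTU value are examples, and the switch and all blades must be configured consistently):
# raise the MTU on the interface used for NFS traffic
ip link set dev eth0 mtu 9000
# verify the new setting
ip link show eth0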


Reading GRASS data through GDAL/OGR support

Example 1: We write out a GRASS raster map to GeoTIFF — this format
includes the coordinates within the file’s metadata:

gdal_translate -of GTiff /usr/local/share/grassdata/spearfish/PERMANENT/cellhd/soils soilmap.tif

Example 2: We write out a GRASS vector map to a Shapefile:

ogr2ogr roadsmap.shp /usr/local/share/grassdata/spearfish/PERMANENT/vector/roads/head

Fast image display with tiling
If you want fast access, you might want to try converting e.g. a BIL file to a tiled TIFF and building overviews. You can build overviews for BIL too, but it cannot be tiled directly:

# -co "PROFILE=BASELINE" strips the GeoTIFF tags, giving plain TIF + TFW output
gdal_translate -co "TILED=YES" -co "TFW=YES" -co "PROFILE=BASELINE" source_bil global30.tif
gdaladdo global30.tif 2 4 8 16

GDAL performance problem?
GDAL_CACHEMAX is specified as a number of megabytes (the default is only 10 or so). So something like:
gdal_translate -of GTiff -co TILED=YES --config GDAL_CACHEMAX 120 madison_1f_01.jpg madison_1f_01.tif
would use a 120 MB cache.

GDAL and 1 bit maps
With a trick you can get those:
gdal_merge.py -co NBITS=1 -o dst.tif src.tif

Generate 8 bit maps for MapServer
gdal_translate -ot Byte -scale in.tif out.tif
Note: As of MapServer 4.4, support has been added for classifying non-8bit raster inputs.

Greyscale conversion
A "proper" conversion would involve a colorspace transformation of the RGB image into IHS (intensity, hue, saturation) or similar, and then taking the intensity channel. GRASS can do things like that.
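GRASS's i.rgb.his module performs the RGB-to-IHS transformation; as a quick-and-dirty alternative, a weighted-luminance greyscale via r.mapcalc is sketched below (map names and weights are illustrative, and this is not the same as the IHS intensity):
# simple luminance-weighted greyscale from three input bands
r.mapcalc "grey = 0.299 * lsat.red + 0.587 * lsat.green + 0.114 * lsat.blue"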

Generate an OGC WKT (SRS)
In WKT the ellipsoid is described by two parameters: the semi-major axis and the inverse flattening. For a sphere the flattening is 0, which would make the inverse flattening infinite; by convention, WKT uses a value of 0 for the inverse flattening of a sphere.

# in the GDAL source code:
cd apps
make testepsg
./testepsg '+proj=lcc +lat_1=35 +lat_2=65 +lat_0=52 +lon_0=10 +x_0=4000000 +y_0=2800000 +ellps=GRS80 +units=m'
Validate Succeeds.
WKT[+proj=lcc +lat_1=35 +lat_2=65 +lat_0=52 +lon_0=10 +x_0=4000000 +y_0=2800000 +ellps=GRS80 +units=m] =
PROJCS["unnamed",
    GEOGCS["GRS 1980(IUGG, 1980)",
        DATUM["unknown",
            SPHEROID["GRS80",6378137,298.257222101]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433]],
    PROJECTION["Lambert_Conformal_Conic_2SP"],
    PARAMETER["standard_parallel_1",35],
    PARAMETER["standard_parallel_2",65],
    PARAMETER["latitude_of_origin",52],
    PARAMETER["central_meridian",10],
    PARAMETER["false_easting",4000000],
    PARAMETER["false_northing",2800000],
    UNIT["Meter",1]]

Simplified WKT[+proj=lcc +lat_1=35 +lat_2=65 +lat_0=52 +lon_0=10 +x_0=4000000 +y_0=2800000 +ellps=GRS80 +units=m] =
PROJCS["unnamed",
    GEOGCS["GRS 1980(IUGG, 1980)",
        DATUM["unknown",
[..]

Extracting spatial subset (subregion)
# -projwin order: W N E S (ulx uly lrx lry)
gdal_translate -of GTiff -projwin 636861 5152686 745617 5054047.5 \
p192r28_5t19920809_nn1.tif test1_utm.tif

Fixing broken projection/datum info for raster data
gdal_translate -of HFA -a_srs epsg:32735 /cdrom/173072lsat.img \
173072lsat_fixed.img

# or, using a WKT file
gdal_translate -of HFA -a_srs file.prj /cdrom/173072lsat.img \
173072lsat_fixed.img

Merge various input maps (optionally re-projecting on the fly with -t_srs) and extract a spatial subset matching the current GRASS region
eval `g.region -g`
gdalwarp -te $w $s $e $n *.TIF \
srtm_cgiar3_italy_north_LL.tif

Export to (limited) TIFF readers such as ArcView, or ImageMagick
Many tools have trouble reading multi-band TIFFs with "band interleaving", the GDAL output default. It is best to use the INTERLEAVE=PIXEL creation option; just add this to the gdal_translate command line:
-co INTERLEAVE=PIXEL
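For example (file names are placeholders):
gdal_translate -co INTERLEAVE=PIXEL multiband.tif multiband_pixel.tif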

Inserting metadata (metadata tags)
gdal_translate -outsize 37.5% 37.5% \
-mo TIFFTAG_XRESOLUTION=300 -mo TIFFTAG_YRESOLUTION=300 \
in.tif out.tif

Raster map reprojection (warping)
gdalwarp -t_srs '+init=epsg:26591 +towgs84=-225,-65,9' test1.tif \
test1_gb.tif

Raster map reprojection (warping) maintaining NULL values (sea etc):

gdalwarp -r bilinear -tr 1000 1000 \
-srcnodata "-32768" -dstnodata "-32768" \
-wo "INIT_DEST=-32768" \
-t_srs epsg:32632 italy_LL.tif italy_UTM32.tif

Reprojecting external map to current GRASS location externally
gdalwarp -t_srs "`g.proj -wf`" aster.tif aster_tmerc.tif

Cut out region of interest with gdalwarp (in target coords)
Add the following to the command line (insert actual values instead of the letters, of course):
#damn order, differs from -projwin!!
-te W S E N
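An illustrative example (the coordinate values and EPSG code are made up):
gdalwarp -t_srs EPSG:32632 -te 600000 5000000 700000 5100000 in.tif out_utm32.tif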

Merging many small adjacent DEMs into one big map (A)
This needs GDAL compiled with Python and numpy installed:
# if not installed in standard site-packages directory
export PYTHONPATH=/usr/local/lib/python2.5/site-packages
gdal_merge.py -v -o spearfishdem.tif -n "-32768" d*.tif

Merging many small adjacent DEMs into one big map (B)
Even easier, just use gdalwarp:
gdalwarp C_1mX1m/dtm*.tif big.tif
Or just a few tiles:
gdalwarp C_1mX1m/dtm0010[4-5]* big_selection.tif

Merge various maps/bands into an RGB composite
gdal_merge.py -of HFA -separate band1.img band2.img band3.img -o out.img

GDAL gdalwarp interpolation comments
Which method (-rn, -rb, -rc or -rcs) should one use for a DEM, and which for data like e.g. Landsat TM, when reprojecting?

-tps: Enable use of thin plate spline transformer based on available GCPs.
-rn: Use nearest neighbour resampling (default, fastest algorithm, worst interpolation quality).
-rb: Use bilinear resampling.
-rc: Use cubic resampling.
-rcs: Use cubic spline resampling (slowest algorithm).

FrankW suggests:
I would suggest -rb for DEMs, and one of the cubic kernels for Landsat data. Of course, there are various factors you should take into account. Using -rb (bilinear) for the DEM performs local averaging of the nearby pixel values in the source. This gives reasonable results without introducing the risky "overshoot" effects you might see with cubic, which could be disturbing for analysis or visualization of a DEM. Cubic should in theory do better at preserving edges and general visual crispness than bilinear or nearest neighbour. However, if you want to do analysis with the Landsat data (such as multispectral classification), I would suggest just using -rn (nearest neighbour) to avoid causing odd effects in the spectral values.
Nobody can tell you which method should be used in your case. Generally speaking, for upsampling the spline and cubic interpolators are more suitable (-rcs and -rc). For downsampling, or at the same resolution, it is entirely up to you which method looks better. Just try them all and select the one that is most appropriate for you.
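Putting that advice into commands, a sketch (file names and target SRS are examples):
# bilinear for the DEM
gdalwarp -rb -t_srs EPSG:32632 dem_LL.tif dem_utm32.tif
# nearest neighbour for Landsat that will be classified afterwards
gdalwarp -rn -t_srs EPSG:32632 landsat_LL.tif landsat_utm32.tif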

Geocoding with ‘gdal_translate’
FrankW suggests:
As far as I know there is no on-screen method for doing this, but it certainly isn't too difficult with a little bit of semi-manual work. Open two OpenEV views, one with the unreferenced image and one with the geo-referenced base you want to use. Move your cursor over the non-referenced one (let's call it image1) and record (read: write down!) the pixel x,y values. Then look at the same location in image2 and write down the geocoordinates of that pixel. You should end up with four numbers for each location you want to pin the image to; repeat for as many tie points as you need. Then use gdal_translate to translate image1.tif to image1_georefd.tif, adding a -gcp parameter for each set of coordinates. Like so:

gdal_translate -gcp 1 1 500000 5000000 \
-gcp 200 400 550000 5250000 image1.tif \
image1_geo.tif
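Note that gdal_translate only attaches the GCPs to the file; to actually rectify the image, a gdalwarp step follows, e.g. with a first-order polynomial fit (or -tps for thin plate splines). A sketch with assumed file names:
# rectify using the embedded GCPs (first-order polynomial)
gdalwarp -order 1 image1_geo.tif image1_rectified.tif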

Reading HDF ASTER
gdalinfo pg-PR1B0000-2002031402_100_001

To select a channel and warp to UTM (or whatever is inside):
gdalwarp HDF4_SDS:ASTER_L1B:"pg-PR1B0000-2002031402_100_001":2 aster_2.tif
gdalinfo aster_2.tif

A new bugfix release of GDAL/OGR (now at v1.5.2) is out today. This stable-branch release fixes 37 issues in various drivers.