a lesson on digital topography

Part 1. Finding Data
Part 2. GIS
Part 3. Zooming in and Cropping Out
Part 4. Generating Mesh
Part 5. Image Processing
Part 6. Image Processing Cont.

UPDATE: The GH definition has been replaced with a Rhino Python script that runs significantly faster and handles much bigger files. On an old laptop, generating a terrain with about 3 million polygons took about 1 minute. Enjoy.

Since I literally have to re-teach this every time I get a new group of students, I figured it might be easier if I just put my lesson online.

As architects concerned with the interaction between our built and our ecological environment, we often need to negotiate design intent and desires with existing conditions. With the increasing availability of high-resolution, high-accuracy data such as 1/9 arc-second NED and even LIDAR imagery, how we learn to process and extract information from these datasets will redefine how we understand site and context.

Generating a good topo is a common task in architecture, urban design, and landscape architecture. Yet there aren’t many tools out there that let you generate 3D topo information, run a rigorous set of analyses on the dataset, and still give you the freedom to mold and make changes to the existing condition. Such a seemingly simple task is a multi-platform process that requires a much deeper understanding of the science and technology involved. So for this lesson, I’ll concentrate on breaking down this rather complex and cumbersome process and attempt to explain the science and technology behind it.

This lesson is divided into several sections. Finding a good source of data is probably the most important part of this process, and also the most frustrating for most people. There are many acronyms, such as GTOPO30, SRTM1, SRTM3, SRTM30, DEM, NED1, NED1/3, and NED1/9, and getting to understand the different datasets will probably require more time than most people have. The general rule of thumb is to look at the number behind the name, which usually indicates the resolution of the dataset. The number typically refers to arc seconds: 1 is 1 arc second, meaning each pixel in the image equates to roughly 30 meters; 3 arc seconds is about 90 meters; 1/3 arc second is about 10 meters; and 1/9 arc second is about 3 meters. SRTM3 covers most of the earth, but data at 1 arc second and finer is currently only available for free in the U.S.
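If you want to sanity-check those numbers yourself, the conversion is simple spherical geometry: one arc second of latitude is roughly 31 meters, and one arc second of longitude shrinks by the cosine of the latitude. Here's a quick sketch in plain Python (it assumes a spherical earth of radius 6378.137 km, so treat the output as approximate):

import math

def arcsec_to_meters(arcsec, latitude_deg):
    # approximate ground size of an arc-second cell at a given latitude
    r = 6378.137 * 1000                                 # earth radius in meters (spherical approximation)
    deg = arcsec / 3600.0                               # arc seconds -> degrees
    ns = math.radians(deg) * r                          # north-south extent
    ew = ns * math.cos(math.radians(latitude_deg))      # east-west extent shrinks toward the poles
    return ns, ew

print(arcsec_to_meters(1, 40))       # roughly 31 m north-south, 24 m east-west at 40 deg N
print(arcsec_to_meters(1.0 / 3, 40)) # roughly 10 m north-south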

Once the images are downloaded, you will also need to project them properly. The default projection system the topo data uses skews the image; for example, the perpendicular grid of Manhattan becomes distorted with the default setting. If you do not re-project the image, you will have a hard time matching it to other site data. The different sections of this tutorial will go through the entire process of downloading, re-projecting, cropping, meshing, and visualizing. It is long and visually cluttered for a blog format; I intend to repost this as a formatted PDF in the future.

We will be using QGIS, Rhino 3D / Grasshopper 3D, Photoshop / GIMP, and MultiSpec in this lesson.

NOTE: You will see screen grabs from different OSes such as Windows, OSX, and Linux; that is to illustrate the fact that this process is, for the most part, independent of your OS. The only platform-specific part is the mesh generation, which is done in Grasshopper 3D / Rhino 3D, still the most robust 3D platform out there. Buy a copy and support the company.


1. Finding DATA

1a. There are a number of sites where you can find digital elevation data. High-resolution data may or may not be available for free in your home country, but in the US, data at a resolution as high as 3 meters per pixel is distributed freely by USGS’s National Map.

For other countries, search for SRTM3. You may also contact your local university’s geography department; they may have access to higher-resolution data for a fee, or for free if you’re lucky. You can also find SRTM and GTOPO data on USGS’s EarthExplorer.




1b. The interface is similar to Google Maps, so it’s rather easy to navigate. In the search bar at the top, I’m going to search for Centralia, PA, a mining town that has a rather extraordinary landform and history.


1c. Click on Download Data in the upper right. You should see a grid overlaying the map. Click on the download button in a tile and it’ll bring you to another screen. The tiles are pre-formatted, so you will get the files a little quicker. Alternatively, you can draw your own boundary with the button next to the zoom in/out buttons.


1d. There are many different datasets available, but in this lesson we’ll only download the satellite imagery and the elevation data.


1e. You are now asked to choose the format you prefer for each dataset. 1/9 arc-second resolution data may or may not be available for your area, but 1/3 arc-second data should cover the entire U.S.; choose whichever suits your needs. For the format, I chose ArcGrid in this case, but QGIS is able to process all of the formats.


1f. For the satellite image, I chose the NAIP 4-band data, which means the sensor used to capture the image records the red, green, blue, and near-infrared spectrum. We will use the infrared band to visualize vegetated areas; it is also an indicator of plant health.


1g. Enter your email and you’ll typically receive an email within 10 minutes with links to the files. Clicking on a link will bring you back to USGS and initiate the download process. The site only allows 5-6 concurrent downloads, so be patient with the satellite imagery.



1i. The downloaded data can be quite confusing in terms of which file actually contains the useful information. I always go by the rule of looking at the file size: the largest file will contain the information you want, just make sure to dig through all the folders.



2. GIS
Once the files have all been downloaded, we’ll use QGIS to process the datasets. QGIS is open source and has a vibrant community of developers; its functionality really rivals some of the commercial GIS software out there. Its GUI is very straightforward, but I will not be introducing the software here; please go on YouTube to find a relevant intro video. All we will do in QGIS is re-project the image from WGS 84 (World Geodetic System 1984), which is the default format of the image, to NAD 83 (North American Datum 1983), which is the projection system the satellite image uses and has less distortion. If you are planning to overlay vector data from OpenStreetMap or TIGER/Line from the US Census Bureau, make sure to identify the projection ID.

2a. Drag and drop your NED data into QGIS; you’ll immediately see a greyed-out image. That’s your NED data hiding in plain sight. The image data that comes with the NED file is 32-bit, so we will need to normalize it to fit our 8-bit display.


2b. Double click the NED layer to bring up the property window. Go to the Style tab and check Custom Min / Max Values; right below, under Load Min / Max Values from Band, check Actual and click the Load button on the right. This will load the height values of each cell and find the min / max boundary. Under Contrast Enhancement, change to Stretch to Min / Max. Click Apply and you should see your NED data appear.



To explain what’s happening here: the image appeared grey because all the data sits in a middle range, between 246 and 570, inside a much larger data space (there’s a whole lot more to it than this; I might try to explain further down). To properly display the data, you need to adjust the min / max values. It’s a bit of a nuisance, but once you become aware of this, it’s quite powerful.
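What Stretch to Min / Max does under the hood is a plain linear remap of the height values onto the 0-255 display range. A minimal sketch of the same idea (the file isn't read here; the array and the 246 / 570 range are just stand-ins from this example):

import numpy as np

# stand-in for the 32-bit elevation values read from the NED file
heights = np.array([246.0, 310.0, 412.5, 570.0], dtype=np.float32)

vmin, vmax = heights.min(), heights.max()            # 246 and 570 in this example
stretched = (heights - vmin) / (vmax - vmin) * 255   # linear stretch onto the 8-bit display range
display = stretched.astype(np.uint8)

print(display)   # 246 maps to 0, 570 maps to 255, everything else falls in between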

2c. Now drag the satellite image (the largest file) into QGIS’s layer panel. Once again, you’ll find yourself at a loss because things didn’t just work… you can’t see the image. (QGIS 2.0 and above has automatic on-the-fly projection enabled by default, so you will see the image displayed properly, but the layers still use different projection systems and we still need to do something about it.) This is due to the fact that different agencies produced the two datasets and used different projection and datum systems. Double click on a layer to bring up the property window. Switch to the General tab, and under Coordinate Reference System you’ll find the system used for that layer. For this NED, it’s EPSG:4269 – NAD 83, but our satellite image uses EPSG:26918 – NAD83 / UTM zone 18N. We therefore need to re-project one to match the other.
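If you'd rather check a layer's CRS outside the property window, the GDAL Python bindings can report it too. A small sketch, assuming GDAL's Python bindings are installed and with ned_tile.tif standing in for your downloaded file:

from osgeo import gdal, osr

ds = gdal.Open('ned_tile.tif')                        # hypothetical file name
srs = osr.SpatialReference(wkt=ds.GetProjection())    # the dataset's spatial reference
print(srs.GetAttrValue('AUTHORITY', 0),               # e.g. EPSG
      srs.GetAttrValue('AUTHORITY', 1))               # e.g. 4269 for NAD 83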



2d. Since the satellite image is supposed to have the least distortion, we’ll use it as the reference. First check the projection / datum system being used by QGIS; that’s the main projection coordinate system used by the software, while each layer of data may have its own. You can find it at the bottom next to the globe icon. In this case it’s set to EPSG:4269.


2e. To change that to the projection system used by the satellite image, right click on the sat img’s layer and click Set Project CRS from Layer.


2f. Now we will re-project the elevation data to match the satellite image. Go to the top menu under Raster > Projections, make sure the input file is pointing to the elevation layer, check Target SRS, and match it to the satellite image. Click on Warp (Reproject). It will take a little while, so be patient.
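The warp dialog is a front end for GDAL, so the same re-projection can be scripted if you have a lot of tiles to process. A sketch using the GDAL Python bindings, with the file names made up for illustration:

from osgeo import gdal

# re-project the NED raster from NAD 83 geographic (EPSG:4269)
# to the satellite image's CRS, NAD 83 / UTM zone 18N (EPSG:26918)
gdal.Warp('ned_utm18n.tif',     # hypothetical output name
          'ned_tile.tif',       # hypothetical input name
          srcSRS='EPSG:4269',
          dstSRS='EPSG:26918')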



2g. This time you should see the satellite image overlaying nicely on the NED image. However, the satellite image is only a small tile within a larger site, so we’ll need to bring the rest of the tiles in. For now, just drag and drop the rest of the satellite images from your folder into QGIS.


2h. Now let’s add a layer of vector data to QGIS. Openstreetmap.org distributes its vector data. Go to the site, find your location, set the export format to OpenStreetMap XML Data, and click Export. A file with the .osm extension will be downloaded.


2i. Back in QGIS, go to Plugins > Fetch Python Plugins and make sure the OpenStreetMap plugin is installed.



2j. Now go to Manage Plugins and check the box to enable the plugin.



2k. A new menu item should appear. Now under Web > OpenStreetMap, click Load OSM from file. Select your downloaded .osm file.


2l. Once again, the vectors didn’t show up, and that’s due to the different projection system used by OSM. Those who know QGIS may say you can turn on ‘on the fly’ CRS transformation, but I recommend against that: because we’re only using QGIS to save the information out to CAD, ‘on the fly’ CRS transformation only temporarily matches layers with different CRSs, and when you export the information, you’ll be screwed…


2m. To change the projection of the OSM data, we’ll right click on the layer and click Save As. Check that CRS is set to Project CRS and Format to ESRI Shapefile. Click OK. Do the same for both the Line and Polygon layers.
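The Save As step can also be scripted with GDAL/OGR's VectorTranslate, the Python counterpart of ogr2ogr. This is only a sketch; the file names, the layer name, and the target EPSG code are assumptions, so check the layer names the OSM driver reports for your own file first:

from osgeo import gdal

# convert the OSM line layer to an ESRI Shapefile in the project CRS
gdal.VectorTranslate('osm_lines.shp',          # hypothetical output
                     'map.osm',                # hypothetical OSM download
                     format='ESRI Shapefile',
                     layers=['lines'],         # the OSM driver exposes points, lines, multipolygons, ...
                     dstSRS='EPSG:26918',
                     reproject=True)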



3. Zooming in and Cropping Out
Now that we’ve brought into QGIS all the information we want to include, it’s time to zoom in and define our site boundary a little further. What we want to be able to do is go in and freehand sketch a boundary, and then crop everything else according to the new boundary.

3a. First go to Layer > New > New Shapefile Layer. This will be our temporary sketch layer and can be trashed afterward. Make sure Polygon is selected under Type and the CRS is the same as the project’s.



3b. Back in the layer window you’ll see a new layer. Highlight the layer and click on the pen icon in your menu bar to make the layer editable. Then click on the Add Feature icon to start drawing your new boundary.




3c. Now that you have sketched out your new boundary, you can use QGIS to generate a proper rectangular boundary. Go to Vector > Research Tools > Polygon from Layer Extent, and you should see a rectangular shape added to your layers.


3d. Next we want to use the newly created boundary to crop all the data: the NED, the sat img, and the OSM vectors. First we’ll deal with the sat img. Since the images come as small tiles, some intersecting the new boundary and some not, I want to merge them all into one image so I only need to crop once. To do that, go to Raster > Misc > Merge. Since the files are in different folders, put them all inside one folder, check Choose input directory instead of files, and check Recurse subdirectories; QGIS will cycle through all the folders, find all the GeoTIFF files, and tile them. You should now see a new merged image and may remove the individual tiles.
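If you prefer to script the merge, GDAL will mosaic a list of tiles in one call. A minimal sketch; the glob pattern and the output name are assumptions for illustration:

import glob
from osgeo import gdal

# collect the NAIP GeoTIFF tiles and mosaic them into one image
tiles = glob.glob('naip_tiles/*.tif')    # hypothetical folder holding all the tiles
gdal.Warp('naip_merged.tif', tiles)      # warping multiple inputs into one target acts as a mosaic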




3e. To crop the image, go to Raster > Extraction > Clipper. Give it a new name, set the save format to GeoTIFF, check Mask layer, and select the rectangular boundary. Do the same with the NED layer, once again saving as GeoTIFF. Once it’s done, change the color mapping to Pseudocolor and turn on the transparency of the sat img; you should get something like this.
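The Clipper dialog is likewise calling GDAL behind the scenes; the same crop can be scripted by using the rectangular boundary as a cutline. A sketch with assumed file names:

from osgeo import gdal

# clip the merged satellite image to the rectangular boundary polygon
gdal.Warp('naip_clipped.tif',               # hypothetical output
          'naip_merged.tif',                # hypothetical merged mosaic from the previous step
          cutlineDSName='boundary.shp',     # hypothetical boundary shapefile
          cropToCutline=True)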




3f. Cropping vector data is similar: go to Vector > Geoprocessing Tools > Clipper, choose the layer you want clipped and the boundary to use. Do that with both the line and polygon layers.




4. Generating Mesh
Now that we have processed all the data, we’re ready to take it out of QGIS and into other design environments. We’ll generate a mesh out of the NED data. UPDATED: the previous version of this lesson used Rhino Grasshopper as the platform. It works, but it’s highly limited due to GH’s memory consumption. The process has been re-written in Rhino Python, which is a much more robust engine and is cross-platform. Generating a 3-million-polygon terrain took about 60 seconds and a 5-million-polygon one about 90 seconds, roughly 50,000 polygons per second on a 4-year-old laptop. Not too bad.

4a. First we’ll convert the NED file. Go to Raster > Conversion > Translate; we’ll be exporting the NED data to Arc/Info ASCII Grid. As the name suggests, this converts the data to a delimited text file, which can then be read easily by many other platforms. Once you’ve created the .asc file, open it with a text editor such as SubEthaEdit or Notepad++. The only thing you have to look out for is whether a No Data value is assigned. A No Data value is only assigned if there is a mistake in the cropping process, meaning cropping outside of the image into an “empty” pixel zone. You will get an error message mentioning the No Data value; either delete that line in the .asc file, or regenerate the .asc file with a proper crop boundary.
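The same conversion can be scripted with GDAL, and it's a convenient way to see the plain-text header that the Rhino script below expects. A sketch with made-up file names; the header numbers are only illustrative:

from osgeo import gdal

# convert the clipped NED GeoTIFF to Arc/Info ASCII Grid (.asc)
gdal.Translate('ned_clipped.asc',       # hypothetical output
               'ned_clipped.tif',       # hypothetical clipped NED file
               format='AAIGrid')

# the .asc file starts with a short header, something like:
#   ncols        1201
#   nrows        1201
#   xllcorner    -76.36
#   yllcorner    40.78
#   cellsize     0.000092592
#   NODATA_value -9999     (only present in some files)
# followed by rows of space-delimited height values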




4b. UPDATED: Run the Rhino Python script. You will immediately be prompted to open a file; point it to the Arc/Grid .asc file. The script will do the rest.


# Script written by Ted Ngai 4/26/2014
# copyright Environmental Parametrics, a research entity of atelier nGai
# This Rhinoscript automates the process of generating digital terrain
# model. File must be converted to .asc Arc/Grid ASCII format with 
# QGIS or GDAL library. 

# You should always reference the original metadata to determine
# the unit and scale of the file. Although the z scale is always in meters,
# depending on the projection system used your x-y scale might be way off

import math
import time
import rhinoscriptsyntax as rs

#timer object
t1 = time.time()

#open and read the Arc/Grid file
fname = rs.OpenFileName("Open", "Arc/Grid ASCII Files (*.asc) |*.asc||")
f = open(fname)
lines = f.readlines()

# reading meta data from the Arc/Grid ASCII header
# (ncols, nrows, xllcorner, yllcorner, cellsize, optional NODATA_value)
ncol = lines[0].split()[1]
nrow = lines[1].split()[1]
xmin = lines[2].split()[1]
ymin = lines[3].split()[1]
cellsize = lines[4].split()[1]

ncol = int(ncol)
nrow = int(nrow)

dx = cellsize
dy = cellsize
s = 5    # number of header lines to skip before the height values

n = lines[5].split()[0]
if n == 'NODATA_value':
    s = 6

# check for distortion from certain projection systems: re-projected files
# may list separate dx / dy values instead of a single square cellsize
if lines[4].split()[0] != 'cellsize':
    dx = lines[4].split()[1]
    dy = lines[5].split()[1]
    s = 6
    if lines[6].split()[0] == 'NODATA_value':
        s = 7

dx = float(dx)
dy = float(dy)

# for certain projection system, this would scale the x-y to appropriate size
if dx < 1:
    #calculate cellsize in meters
    r = 6378.137
    lat1 = float(ymin) * math.pi/180
    lat2 = (float(ymin) + float(dy)) * math.pi/180
    lon1 = float(xmin) * math.pi/180
    lon2 = (float(xmin) + float(dx)) * math.pi/180

    d = math.acos(math.sin(lat1)*math.sin(lat2)+math.cos(lat1)*math.cos(lat2)*math.cos(lon2-lon1))*r*1000
    theta = math.atan(float(dy)/float(dx))# * 180 / math.pi

    dx = math.cos(theta) * d
    dy = math.sin(theta) * d

#read heightfield data
z = []
for s in xrange (s, len(lines)):
    z.extend (lines[s].split())

#generate x and y range
x = []
y = []

for v in range(0,nrow):
    for u in xrange(0,ncol):
        x.append(u * dx)
        y.append((nrow - 1 - v) * dy)    # first row of the file is the northern (max y) edge

# generate face vertices (one quad per grid cell, indices into the vertex list)
face = []
for n in range(0,(nrow-1)*(ncol)):
    if n%(ncol) != 0:
        face.append((n-1, n, n+ncol, n+ncol-1))

# generate vertex coordinates
vertices = []
for n in range(0,len(z)):
    vertices.append((x[n], y[n], float(z[n])))

#make mesh in rhino
rs.AddMesh(vertices, face)

#timer object
t2 = time.time() - t1
timer = ('elapsed time : '+ str(t2) + ' seconds')

rs.MessageBox(timer, buttons=0, title=None)



4d. Next we'll need to check the scale of this topography. The vertical values are good because they came straight from the data file; the length and width, however, are generated arbitrarily, so we need to go back to QGIS to bring more information in. Fortunately we already have the site boundary layer; we just need to save that file out as a .dxf and bring it into Rhino 3D. So back in QGIS, right click on the boundary layer and click Save As. Make sure the CRS is Project CRS and the format is AutoCAD DXF. Now import it into Rhino and you should see the rectangle in the middle of nowhere; zoom all and move it back to the origin. Now use Scale 2D to scale the mesh to match the rectangle.
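If you'd rather not eyeball the Scale 2D factor, you can compute it from bounding boxes inside Rhino. A rough rhinoscriptsyntax sketch, assuming a uniform x-y scale is what you're after; the prompts and variable names are mine, not part of the terrain script:

import rhinoscriptsyntax as rs

mesh = rs.GetObject("Select terrain mesh", rs.filter.mesh)
rect = rs.GetObject("Select boundary rectangle", rs.filter.curve)

mesh_bb = rs.BoundingBox(mesh)    # 8 corner points of the bounding box
rect_bb = rs.BoundingBox(rect)

# scale factor taken from the x extents (corner 0 is the min corner, corner 1 is along +x)
factor = (rect_bb[1].X - rect_bb[0].X) / (mesh_bb[1].X - mesh_bb[0].X)

# scale the mesh in x and y only, leaving the heights (z) untouched
rs.ScaleObject(mesh, mesh_bb[0], (factor, factor, 1.0))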




5. Image Processing
Now that we have the mesh, it's time to go back to the images and get more out of them. First we'll take the NED data and bring it into Photoshop purely for presentation purposes. Then we'll go back to the sat img to look at the other "bands" of data that are embedded within the file.

5a. First, if you try to open the NED file (in GeoTIFF format), you get something like this: a white image. Just like what happened in QGIS, the data is hidden inside and we just need to bring it out.


5b. Notice that on the file tab, after the file name, it says (Gray/32): this GeoTIFF is a 32-bit grayscale image and needs to be converted for Photoshop. PS is not yet friendly to 32-bit formats, although they're getting increasingly popular in HDR imaging, so we need to reduce the file to 16-bit: Image > Mode > 16 Bits/Channel. The HDR Toning window will then open; use Equalize Histogram. You should be able to see your NED image now.



5c. You want to change from Grayscale to RGB Color.


5d. Next, add a Gradient Map: Image > Adjustments > Gradient Map.


5e. Now choose your favorite gradient mapping or create your own. You should see the image change in response to the colormap. Use it to reveal details, and yes, at this point your data is no longer scientific, but purely for visual purposes. You should get something like this at the end.




6. Image Processing Cont.
For this part we need to use MultiSpec to process the satellite imagery. As you may remember, this set of images contains 4 bands of data; other common imagery, such as Landsat 7, contains 7 bands. It means the satellite used to capture these images has multiple sensors capturing different wavelengths. We humans only see RGB, but for scientific research other wavelengths can begin to tell us things we can't perceive visually. The NAIP imagery contains the typical RGB bands plus an NIR (near-infrared) band. If you were to just open this up in Photoshop you'd get a faint image; that's because Photoshop is not made for research, so we once again need to process the data.

6a. The software is very straightforward. Open Image, and a dialog box comes up. Change these settings: Magnification: 0; Stretch: Linear; Min/Max: Clip 0% of Tails; Treat 0's as: Black. Click OK and you should see something like this: a nice high-res satellite image.




6b. Now Open Image again and change these settings, keeping the rest the same: Red - 4, Green - 1, Blue - 2. You're now creating what is called an NRG image. The 4th band of the satellite image is the near-infrared radiation reflected off vegetation. You now place that in the red channel of the RGB image so the reflected NIR radiation shows up as the red component, a technique researchers use to identify vegetation.
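The same NRG composite can be produced outside MultiSpec by reordering the bands with GDAL, which is handy if you want a file to texture-map directly. A sketch, assuming a 4-band NAIP GeoTIFF with the near-infrared in band 4, and made-up file names:

from osgeo import gdal

# place NIR (band 4) in the red channel, band 1 in green, band 2 in blue
gdal.Translate('naip_nrg.tif',         # hypothetical output
               'naip_clipped.tif',     # hypothetical 4-band NAIP image
               bandList=[4, 1, 2])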



Finally, all the images generated can be used for texture mapping as well.

