p i n u p s p a c e
http://www.tedngai.net

urban coastal bathymetry
Wed, 21 May 2014 | ted

FEMA recently released their updated flood maps, the Flood Insurance Rate Maps (FIRMs), and there have been some visualizations that attempt to illustrate what our cities will look like as a result of rising water levels.

Zone maps are useful for policy making, but flow over terrain is much more interesting than its boundary. To let that complexity come through, I joined USGS’s GMTED 2010 elevation data with NOAA’s bathymetry data recently released for storm surge research, and juxtaposed them with the U.S. Census Bureau’s Metropolitan Statistical Area (MSA) boundaries indicating major urban areas.

The resulting imagery allows one to gain a much wider perspective on the relationship between urbanization, geomorphology, and the potential impact of sea level change.

- All blue and blue-green areas are below current sea level, with color indicating depth.
- The red zone is from current sea level to 1 meter.
- The orange zone is 1 to 5 meters.
- Above 5 meters is shown in grey scale: dark grey is low elevation and light grey is high elevation.

Both the red and orange zones represent areas that are currently subject to storm surge events and future sea level rise. Hatched areas are metropolitan areas as defined by the Census Bureau’s Metropolitan Statistical Areas (MSAs).

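For reference, here is a minimal sketch, in Python with NumPy and matplotlib and using placeholder data, of how the legend above could be applied to a gridded elevation model. It only illustrates the color scheme; it is not the actual pipeline used to produce the maps below.

import numpy as np
import matplotlib.pyplot as plt

#placeholder elevation grid in meters; negative values are below current sea level
elev = np.random.uniform(-60, 60, (400, 400))

rgb = np.zeros(elev.shape + (3,))

#below sea level: blue to blue-green ramp, scaled by depth
below = elev < 0
depth = np.clip(-elev[below] / 60.0, 0, 1)
rgb[below] = np.column_stack([0.1 * depth, 0.4 + 0.4 * depth, 1.0 - 0.5 * depth])

rgb[(elev >= 0) & (elev < 1)] = [0.85, 0.1, 0.1]   #red: current sea level to 1 m
rgb[(elev >= 1) & (elev < 5)] = [1.0, 0.6, 0.1]    #orange: 1 to 5 m

#above 5 m: grey scale, dark = low elevation, light = high elevation
above = elev >= 5
grey = np.clip(elev[above] / 60.0, 0, 1)
rgb[above] = np.column_stack([grey, grey, grey])

plt.imshow(rgb)
plt.show()
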
BayArea
Florida
Georgia
Maine
MidAtlanticBathymetry
NewEngland
NewOrleans
NorthWest
NY-Philly
SoCal
VirginiaBeachNC

GIS data processing with python
Sun, 06 Apr 2014 | ted

1. Introduction to GIS data processing

There are many sources of public data available in the US that can be used to create spatial visualizations, such as the Census Bureau, American Community Survey, Agricultural Census, County Health Rankings, etc. However, datasets typically come in a wide variety of formats and require some processing to be suitable for use with GIS software. This tutorial uses the Local Area Personal Income (CA1-3) dataset from the Bureau of Economic Analysis as an example because of its relative complexity, involving multiple files and format conversion. This data will be joined with the US county map distributed by the Census Bureau.

API_web

2. Preparations

The following is the list of open-source / cross-platform software used in this tutorial.

QGIS - http://www.qgis.org/en/site/
Enthought Canopy - https://www.enthought.com/products/canopy/
Python dbf library - https://pypi.python.org/pypi/dbf/0.95.004

The following data is used with this tutorial.

TIGER/Line Shapefile US County - ftp://ftp2.census.gov/geo/tiger/TIGER2013/COUNTY/tl_2013_us_county.zip
Local Area Personal Income CA1-3 - http://www.bea.gov/regional/downloadzip.cfm

If you’re new to Python, Enthought Canopy is an IDE suited for learning. Download the software, create a new file, save it to a folder, and make sure to use the .py file extension. In the IPython window in the lower right, click “Change to Editor Directory” and check “Keep Directory Synced to Editor”. The folder you saved your script to is now your working directory; it should be the folder where you unzipped the CA1-3 files, and also where you put the dbf.py file from the dbf library.

3. Importing Data into Python

The CA1-3 data unzips into 59 files: 52 state files, 2 metadata files, and 5 Metropolitan / Micropolitan subdivision files. This tutorial concentrates on pairing the county information with the TIGER/Line shapefile, so keep only the 52 state files in the folder and move the rest to a different folder.

The GIS county shapefile uses the 5-digit FIPS county code to identify all the counties, and the CA1-3 data uses the same GeoFIPS code, so the goal is to match the two datasets through this column. However, upon closer inspection of the CA1-3 data, each FIPS code is used 3 times, to represent Total Personal Income, Total Population, and Per Capita Personal Income. To make this data work for GIS, we need to write code that picks out only one of these measures. This can easily be done with the pandas library in Python.

#import the pandas library
import pandas as pd

#locate one of the csv files
read_csv = ("./CA1-3/CA1-3_AK.csv" )

#import the csv file into python
st = pd.read_csv(read_csv)

Upon running the code, an EOF error is returned for line 106. Lines 104 to 106 in the csv file contain 3 lines of notes. Since pandas cannot process those notes, we need another piece of code to eliminate them. This is also an opportunity to introduce the csv library for reading and writing data files as a way to manage the data.

Since all the state files contain these 3 lines of notes beginning with “Note: See the included footnote file.”, we can take advantage of this pattern: the code reads the original data file and writes it directly into a new file until it sees the note, then stops writing.

import csv

#locate one of the csv files
read_csv = ("./CA1-3/CA1-3_AK.csv" )
#write the modified data to a new csv file
write_csv = ("./CA1-3/CA1-3_AK_mod.csv")

with open(write_csv,'wb') as f, open(read_csv,'rb') as w:
    writer = csv.writer(f, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    reader = csv.reader(w, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    for row in reader:
        if row == ['Note: See the included footnote file.']:
            break
        else:
            writer.writerow(row)

Once the code has been executed, we get a new file with _mod appended to its name (here CA1-3_AK_mod.csv). This is the file we will bring into Python with pandas.

import csv
import pandas as pd

#locate one of the csv files
read_csv = ("./CA1-3/CA1-3_AK.csv" )
#write the modified data to a new csv file
write_csv = ("./CA1-3/CA1-3_AK_mod.csv")

with open(write_csv,'wb') as f, open(read_csv,'rb') as w:
    writer = csv.writer(f, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    reader = csv.reader(w, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    for row in reader:
        #print row
        if row == ['Note: See the included footnote file.']:
            break
        else:
            writer.writerow(row)

st = pd.read_csv(write_csv)

Now that the data is in Python through pandas, we can continue to process it as mentioned above. The idea is to pick out only the entries that relate to per capita personal income, which have a LineCode of 3. We can then use the following code to trim the dataset to that specific criterion.

import csv
import pandas as pd

#locate one of the csv files
read_csv = ("./CA1-3/CA1-3_AK.csv" )
#write the modified data to a new csv file
write_csv = ("./CA1-3/CA1-3_AK_mod.csv")

with open(write_csv,'wb') as f, open(read_csv,'rb') as w:
    writer = csv.writer(f, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    reader = csv.reader(w, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    for row in reader:
        #print row
        if row == ['Note: See the included footnote file.']:
            break
        else:
            writer.writerow(row)

st = pd.read_csv(write_csv)

st = st[st["LineCode"]==3]

Looking at the dataset now with the command print st, we can see the rows of data have been reduced from 102 to 34. The next step is to reduce the number of columns, as many of them do not contain relevant information for GIS, and to change the NO DATA value, represented in this format by “(NA)”, to 0 so all the data can be properly represented as numeric values. Finally, we need to fix the FIPS codes: the leading zero has been dropped because the column was read as a number, so we need to treat it as text.

import csv
import pandas as pd

#locate one of the csv files
read_csv = ("./CA1-3/CA1-3_AK.csv" )
#write the modified data to a new csv file
write_csv = ("./CA1-3/CA1-3_AK_mod.csv")

with open(write_csv,'wb') as f, open(read_csv,'rb') as w:
    writer = csv.writer(f, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    reader = csv.reader(w, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    for row in reader:
        #print row
        if row == ['Note: See the included footnote file.']:
            break
        else:
            writer.writerow(row)

st = pd.read_csv(write_csv)

for s in xrange (0, len(st)):
    st.loc[s] = st.loc[s].replace('(NA)',0)
    st.loc[s,['GeoFIPS']] = (format(st['GeoFIPS'].values[s],'05d'))

#keep only entries with LineCode 3
st = st[st["LineCode"]==3]

#dropping the irrelevant fields
st = st.drop(['Region'], axis = 1)
st = st.drop(['Table'], axis = 1)
st = st.drop(['LineCode'], axis = 1)
st = st.drop(['IndustryClassification'], axis = 1)
st = st.drop(['Description'], axis = 1)

Looking at the data again with print st, you should notice the index changed due to the line that drops all entries other than those with LineCode == 3. The leftmost column is the index value, and you can see it skips two numbers at a time. Although it’s not a big problem now, we need to re-index all the rows to start from 0 for data processing down the line. Then write the modified version back to the same CSV file under the same name. Add the following code to the end.

#reindex the data to start from 0
st = pd.DataFrame(st.values, index = range(0,len(st)), columns = st.columns)

st.to_csv(write_csv, index = False, header = True, quotechar = '"', quoting=csv.QUOTE_NONNUMERIC)

4. Convert CSV format to DBF

Shapefiles use an old file format, DBF, to store their attribute data. Although the format is quite robust, there are a few limitations we have to deal with. The following code will illustrate the nuances of working with dbf files.

We need to define the field names, field types, and the number of characters. Essentially we need to convert the column names into a single field-definition string. To see the column names we use the following command.

In [34]: st.columns
Out[34]: Index([u'GeoFIPS', u'GeoName', u'1969', u'1970', u'1971', u'1972', u'1973', u'1974', u'1975', u'1976', u'1977', u'1978', u'1979', u'1980', u'1981', u'1982', u'1983', u'1984', u'1985', u'1986', u'1987', u'1988', u'1989', u'1990', u'1991', u'1992', u'1993', u'1994', u'1995', u'1996', u'1997', u'1998', u'1999', u'2000', u'2001', u'2002', u'2003', u'2004', u'2005', u'2006', u'2007', u'2008', u'2009', u'2010', u'2011', u'2012'], dtype='object')

With the DBF format, each field name needs a definition for its field type. C(5) means it’s a character field 5 characters wide; N(11,0) means it’s a numeric value with 11 digits and 0 decimal places. The first column is the FIPS code; it should be treated as text so the leading zero doesn’t get dropped, and it only needs 5 characters. The second column (the area name) varies a lot, but 50 characters should be plenty. The rest of the columns are income values, and 11 digits seems to be the maximum. The year columns are also prefixed with the letter N in the code below, because DBF field names cannot start with a digit.

import dbf

fnames = ''
for n in xrange (0, len(st.columns)):
    if n == 0:
        fname = (st.columns[n] + ' C(5); ')
        fnames = fnames + fname
    elif n == 1:
        fname = (st.columns[n] + ' C(50); ')
        fnames = fnames + fname
    elif n > 1 and n < len(st.columns)-1:
        fname = ('N' + st.columns[n] + ' N(11,0); ') 
        fnames = fnames + fname
    elif n == len(st.columns)-1:
        fname = ('N' + st.columns[n] + ' N(11,0)')
        fnames = fnames + fname
In [35]: fnames 
Out[35]: 'GeoFIPS C(5); GeoName C(50); N1969 N(11,0); N1970 N(11,0); N1971 N(11,0); N1972 N(11,0); N1973 N(11,0); N1974 N(11,0); N1975 N(11,0); N1976 N(11,0); N1977 N(11,0); N1978 N(11,0); N1979 N(11,0); N1980 N(11,0); N1981 N(11,0); N1982 N(11,0); N1983 N(11,0); N1984 N(11,0); N1985 N(11,0); N1986 N(11,0); N1987 N(11,0); N1988 N(11,0); N1989 N(11,0); N1990 N(11,0); N1991 N(11,0); N1992 N(11,0); N1993 N(11,0); N1994 N(11,0); N1995 N(11,0); N1996 N(11,0); N1997 N(11,0); N1998 N(11,0); N1999 N(11,0); N2000 N(11,0); N2001 N(11,0); N2002 N(11,0); N2003 N(11,0); N2004 N(11,0); N2005 N(11,0); N2006 N(11,0); N2007 N(11,0); N2008 N(11,0); N2009 N(11,0); N2010 N(11,0); N2011 N(11,0); N2012 N(11,0)'

Now that all the column names are in this single string, we will use it to make a new dbf file.

table = dbf.Table("CA1-3_AK", fnames)

You should see a new file CA1-3_AK.dbf in your working directory. The next step is to get the data from the CSV file and format it for DBF. From the Python DBF website - http://pythonhosted.org//dbf/ - we learn that each data entry needs to be in a format like ('John Doe', 31, dbf.Date(1979, 9, 13)), ('Ethan Furman', 102, dbf.Date(1909, 4, 1)). That means we need to turn each row of the data into a tuple and append it to the DBF table.

table.open() 
for s in xrange (0, len(st)): 
    record = [] 
    for fn in st.columns: 
        record.append(st.loc[s,fn]) 
    table.append(tuple(record)) 
table.close()

This is all the processing we need to convert a single CSV file from the BEA set into a DBF file that can be used with GIS. The whole code looks like this.

import csv
import pandas as pd
import dbf

#locate one of the csv files
read_csv = ("./CA1-3/CA1-3_AK.csv" )
#write the modified data to a new csv file
write_csv = ("./CA1-3/CA1-3_AK_mod.csv")

with open(write_csv,'wb') as f, open(read_csv,'rb') as w:
    writer = csv.writer(f, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    reader = csv.reader(w, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)

    for row in reader:
        #print row
        if row == ['Note: See the included footnote file.']:
            break
        else:
            writer.writerow(row)

st = pd.read_csv(write_csv)

for s in xrange (0, len(st)):
    st.loc[s] = st.loc[s].replace('(NA)',0)
    st.loc[s,['GeoFIPS']] = (format(st['GeoFIPS'].values[s],'05d'))

#keep only entries with LineCode 3
st = st[st["LineCode"]==3]

#dropping the irrelevant fields
st = st.drop(['Region'], axis = 1)
st = st.drop(['Table'], axis = 1)
st = st.drop(['LineCode'], axis = 1)
st = st.drop(['IndustryClassification'], axis = 1)
st = st.drop(['Description'], axis = 1)

#reindex the data to start from 0
st = pd.DataFrame(st.values, index = range(0,len(st)), columns = st.columns) 

st.to_csv(write_csv, index = False, header = True, quotechar = '"', quoting=csv.QUOTE_NONNUMERIC)

fnames = '' 
for n in xrange (0, len(st.columns)):
    if n == 0:
        fname = (st.columns[n] + ' C(5); ')
        fnames = fnames + fname

    elif n == 1: 
        fname = (st.columns[n] + ' C(50); ')
        fnames = fnames + fname

    elif n > 1 and n < len(st.columns)-1:
        fname = ('N' + st.columns[n] + ' N(11,0); ')
        fnames = fnames + fname

    elif n == len(st.columns)-1: 
        fname = ('N' + st.columns[n] + ' N(11,0)') 
        fnames = fnames + fname 

table = dbf.Table("CA1-3_AK", fnames) 

table.open()
for s in xrange (0, len(st)):
    record = []
    for fn in st.columns:
        record.append(st.loc[s,fn]) 
    table.append(tuple(record))

#change dbf status to not read/writable
table.close()

Now, since there are 52 files to process, the code needs to be generalized to cycle through all the variations. In addition, some error trapping needs to be added to avoid appending to existing csv or dbf files. Here is the full code that cycles through all 52 files inside the ./CA1-3 folder and writes a single dbf file.

import csv
import pandas as pd
import os.path
import dbf

list_csv = "./CA1-3/"
list_names = []
linecode = 3    #3 = per capita personal income, as in the single-file example above (1 = total income, 2 = population)

for file in os.listdir(list_csv):
    if file.endswith(".csv"):
        list_names.append(file)

for i in list_names:

    read_csv = ("./CA1-3/" + i )
    write_csv = ("./CA1-3_mod/" + i )

    #check if write folder exist
    d = os.path.dirname(write_csv)
    if not os.path.exists(d):
        print "path doesn't exist... making folders for your file"
        os.makedirs(d)

    if os.path.isfile(write_csv):
        print (write_csv + " file already exists")

    else:
        with open(write_csv,'wb') as f, open(read_csv,'rb') as w:
            writer = csv.writer(f, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
            reader = csv.reader(w, delimiter = ',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)

            for row in reader:

                #print row
                if row == ['Note: See the included footnote file.']:
                    break
                else:
                    writer.writerow(row)

        st = pd.read_csv(write_csv)

        for s in xrange (0, len(st)):
            st.loc[s] = st.loc[s].replace('(NA)',0)
            st.loc[s,['GeoFIPS']] = (format(st['GeoFIPS'].values[s],'05d'))

        #columns that contained (NA) were read in as text, so transpose the frame,
        #cast each year row back to float, and transpose back
        tt = st.transpose()

        for t in xrange (1969, 1969+len(tt)-7):
            tt.loc[str(t)] = tt.loc[str(t)].astype(float)

        st = tt.transpose()

        #keep only entries with LineCode 3
        st = st[st["LineCode"]==linecode]

        #dropping the irrelevant fields
        st = st.drop(['Region'], axis = 1)
        st = st.drop(['Table'], axis = 1)
        st = st.drop(['LineCode'], axis = 1)
        st = st.drop(['IndustryClassification'], axis = 1)
        st = st.drop(['Description'], axis = 1)

        #reindex the data to start from 0
        st = pd.DataFrame(st.values, index = range(0,len(st)), columns = st.columns)

        st.to_csv(write_csv, index = False, header = True, quotechar = '"', quoting=csv.QUOTE_NONNUMERIC)

        fnames = ''
        for n in xrange (0, len(st.columns)):
            if n == 0:
                fname = (st.columns[n] + ' C(5); ')
                fnames = fnames + fname
            elif n == 1:
                fname = (st.columns[n] + ' C(50); ')
                fnames = fnames + fname
            elif n > 1 and n < len(st.columns)-1:
                fname = ('N' + st.columns[n] + ' N(11,0); ')
                fnames = fnames + fname
            elif n == len(st.columns)-1:
                fname = ('N' + st.columns[n] + ' N(11,0)')
                fnames = fnames + fname

        #aggregate all data into a single dbf file named after the data folder
        if os.path.isfile(list_csv[2:-1]+'.dbf'):
            #the table was already created on an earlier pass through this loop
            print (i[:-4] + '...')
        else:
            #make the table, naming it after the folder ("./CA1-3/" -> "CA1-3")
            table = dbf.Table(list_csv[2:-1], fnames)

        #change dbf status to read/write
        table.open()

        for s in xrange (0, len(st)):
            record = []
            for fn in st.columns:
                record.append(st.loc[s,fn])
            table.append(tuple(record))

        #change dbf status to not read/writable
        table.close()

        print (i+" completed successfully...")

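Before moving into QGIS, a quick sanity check of the aggregated DBF file can save some head scratching. A minimal sketch using the same dbf library; the two things worth confirming are the record count and that GeoFIPS kept its leading zeros.

import dbf

#open the aggregated table written by the script above and inspect a few records
table = dbf.Table("CA1-3.dbf")
table.open()
print len(table), "records"
for i, record in enumerate(table):
    #each record prints all of its fields
    print record
    if i >= 4:
        break
table.close()
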
5. Joining with Shapefile

Finally, drag and drop the TIGER/Line county shapefile as well as the dbf file into QGIS. Double click on the shapefile layer and go to the Joins tab. Click Add and your DBF file should show up as the join layer; set its join field to GeoFIPS, change the Target Field to GEOID, and the data is now paired with the county map.

BEA_0

BEA_1

BEA_2

a lesson on digital topography
Sun, 24 Feb 2013 | ted

Part 1. Finding Data
Part 2. GIS
Part 3. Zooming in and Cropping Out
Part 4. Generating Mesh
Part 5. Image Processing
Part 6. Image Processing Cont.

UPDATE: The GH definition has been replaced with a Rhino Python script that runs significantly faster and handles much bigger files. On an old laptop, generating a terrain with about 3 million polygons took about 1 minute. Enjoy.

Since I literally have to re-teach this every time I get a new group of students, I figured it might be easier if I just put my lesson online.

As architects concerned with the interaction between our built and our ecological environments, we often need to negotiate design intent and desires with existing conditions. With the increasing availability of high-resolution, high-accuracy data such as 1/9 arc second NED and even LIDAR imagery, how we learn to process and extract information out of these datasets will redefine how we understand site and context.

Generating a good topo is a common task in architecture, urban design, and landscape architecture. Yet there aren’t many tools out there that let you generate 3D topo information, run a rigorous set of analyses based on the dataset, and give you the freedom to mold and make changes to the existing condition. Such a seemingly simple task is a multi-platform process that requires a much deeper understanding of the science and technology involved. So for this lesson, I’ll concentrate on breaking down this rather complex and cumbersome process and attempt to explain the sci/tech behind it.

This lesson is divided into many sections. Finding a good source of data is probably the most important part of this process, and also the most frustrating for most. There are many acronyms such as GTOPO30, SRTM1, SRTM3, SRTM30, DEM, NED1, NED1/3, NED1/9, and understanding all the different datasets will probably require more time than most people have. The general rule of thumb is to look at the number behind the name, which usually indicates the resolution of the dataset. The number typically refers to arc seconds: 1 is 1 arc second, meaning each pixel in the image equates to roughly 30 meters; 3 arc seconds is about 90 meters; 1/3 arc second is about 10 meters; and 1/9 arc second is about 3 meters. SRTM3 covers the entire earth, but 1 arc second and finer is currently only available for free in the U.S.

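As a rough sanity check of these numbers, the ground size of an arc second can be computed in a few lines of Python. This is only a sketch assuming a spherical earth; the east-west size of a cell shrinks with the cosine of the latitude, which is why a 1 arc second pixel is about 30 meters at the equator and noticeably narrower further north.

import math

def arcsec_to_meters(arcsec, lat_deg):
    #approximate ground size of a cell, assuming a spherical earth
    r = 6378137.0                                  #earth radius in meters
    ns = math.radians(arcsec / 3600.0) * r         #north-south size
    ew = ns * math.cos(math.radians(lat_deg))      #east-west size shrinks toward the poles
    return ns, ew

print arcsec_to_meters(1, 0)        #roughly (31, 31) at the equator
print arcsec_to_meters(1.0/9, 41)   #roughly (3.4, 2.6) around Centralia, PA
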
Once the image is downloaded, you will also need to project it properly. The default projection system much topo data uses skews the image; for example, the perpendicular grid of Manhattan becomes distorted with the default setting. So if you do not re-project the image, you will have a hard time matching it to other site data. The different sections in this tutorial go through the entire process from downloading, re-projecting, cropping, and meshing to visualizing. It is long and visually cluttered for a blog format; I intend to repost it in a formatted PDF in the future.

We will be using QGIS, Rhino 3D / Grasshopper 3D, Photoshop / GIMP, Multispec in this lesson.

NOTE: You will see screen grabs from different OSes such as Windows, OSX, and Linux; that is to illustrate the fact that this process is, for the most part, independent of your OS. The only part that is platform specific is mesh generation, which is done in Grasshopper 3D / Rhino 3D, still the most robust 3D platform out there. Buy a copy and support the company.

topo_wire

1. Finding DATA

1a. There are a number of sites where you can find digital elevation data. High resolution data may or may not be available for free in your home country, but in the US, data as fine as 3 meters per pixel is distributed freely by USGS’s National Map.

For other countries, search for SRTM3. You may also contact your local university’s geography department; they may have access to higher resolution data for a fee, or for free if you’re lucky. You can also find SRTM and GTOPO data from USGS’s EarthExplorer.

NED_Release_Notes_Dec12

topo_tut_01

topo_tut_02

1b. The interface is similar to Google maps so it’s rather easy to navigate. In the search bar at the top, I’m going to search for Centralia, PA, a mining town that has a rather extraordinary landform and history.

topo_tut_03

1c. Click on Download Data in the upper right. You should see a grid overlaying the map. Click the download button in a tile and it’ll bring you to another screen. The tile is pre-formatted so you will get the file a little quicker. Alternatively, you can draw your own boundary with the button next to the zoom in/out buttons.

topo_tut_04

1d. There are many different datasets available, but in this lesson we’ll only download the satellite imagery and elevation data.

topo_tut_05

1e. You are now asked to choose the format you prefer for each dataset. 1/9 arc second resolution data may or may not be available for your area, but 1/3 arc second data should cover the entire U.S.; choose whichever suits your need. For the format, I chose ArcGrid in this case, but QGIS can process all the formats.

topo_tut_06

1f. For the satellite image, I chose the NAIP 4 band data, which means the sensor that captured the image records the red, green, blue, and infrared spectrum. We will use the infrared to visualize vegetated areas; it is also an indicator of plant health.

topo_tut_07

1g. Enter your email and you’ll typically receive an email within 10 minutes with the links to the files. Clicking on a link will bring you back to USGS and initiate the download process. The site only allows 5-6 concurrent downloads, so be patient with the satellite images.

topo_tut_08

topo_tut_09

1i. The downloaded data can be quite confusing in terms of which file actually contains the useful information. I always go by the rule of looking at the file size: the largest file will contain the information you want, just make sure to dig through all the folders.

topo_tut_10

topo_tut_14

2. GIS
Once the files have all been downloaded, we’ll use QGIS to process the datasets. QGIS is open source and has a vibrant community of developers; its functionality really rivals some of the commercial GIS software out there. Its GUI is very straightforward, but I will not be introducing the software here; please go on YouTube to find relevant intro videos. All we will do in QGIS is re-project the image from WGS 84 (World Geodetic System 1984), which is the default format of the image, to NAD 83 (North American Datum 1983), which is the projection system the satellite image uses and has less distortion. If you are planning to overlay vector data from OpenStreetMap or TIGER/Line from the US Census Bureau, make sure to identify the projection ID.

2a. Drag and drop your NED data into QGIS; you’ll immediately see a greyed out image. That’s your NED data hiding in plain sight. The image data that comes with the NED file is 32-bit, so we will need to normalize it to fit our 8-bit display.

topo_tut_11

2b. Double click the NED layer to bring up the property window. Go to the Style tab, check Custom Min / Max Values, right below under Load Min / Max Values from Band check Actual, and click the Load button on the right. This will load the height values of each cell and find the min / max boundary. Under Contrast Enhancement, change to Stretch to Min / Max. Click Apply and you should see your NED data appear.

topo_tut_12

topo_tut_13

To explain what’s happening here, the image appeared grey because all the data sits in a middle range, between 246 and 570, inside a much larger data space (there’s a whole lot more to it than this; I might try to explain it further down). To properly display the data, you need to adjust the min / max values. It’s a bit of a nuisance, but once you become aware of this, it’s quite powerful.

2c. Now drag the satellite image (the largest file) into QGIS’s layers. Once again you’ll find yourself at a loss, because things didn’t just work: you can’t see the image. (QGIS version 2.0 and above has automatic projection on by default, so you will see the image displayed properly, but the layers still have different projection systems and we still need to do something about it.) This is due to the fact that different agencies produced the two datasets and they used different projection and datum systems. Double click on the layer to bring up the property window. Switch to the General tab and under Coordinate Reference System you’ll find the system used for that layer. For this NED it’s EPSG:4269 – NAD 83, but our satellite image uses EPSG:26918 – NAD83 / UTM zone 18N. We therefore need to re-project one to match the other.

topo_tut_16

topo_tut_17

2d. Since the satellite image is supposed to have the least distortion, we’ll use that as a reference. First check the projection / datum system being used by QGIS; that’s the main projection coordinate system used by the software, while each layer of data may have its own. You can find it at the bottom next to the globe icon. In this case it’s set to EPSG:4269.

topo_tut_19

2e. To change that to the projection system used by the satellite image, right click on the sat img’s layer and click Set Project CRS from Layer.

topo_tut_20

2f. Now we will re-project the elevation data to match the satellite image. Go to the top menu under Raster > Projections, make sure the input file points to the elevation layer, check target SRS and match it to the satellite image’s. Click on Warp (Reproject). It will take a little while, so be patient.

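For those who prefer scripting, recent versions of the GDAL Python bindings can do the same re-projection outside of QGIS. A minimal sketch; the filenames are only examples, and EPSG:26918 is the satellite image's CRS identified above.

from osgeo import gdal

#re-project the NED raster to match the satellite image's CRS
gdal.Warp("ned_nad83_utm18n.tif", "ned_original.tif", dstSRS="EPSG:26918")
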
topo_tut_18

topo_tut_23

2g. This time you should see the satellite image overlaying nicely on the NED image. However, the satellite image is only a small tile within a larger site, so we’ll need to bring in the rest of the tiles. For now, just drag and drop the rest of the satellite images from your folder into QGIS.

topo_tut_24

2h. Now let’s add a layer of vector data into QGIS. Openstreetmap.org distributes its vector data freely. Go online, find your location, set the export format to OpenStreetMap XML Data and click Export. A file with the .osm extension will be downloaded.

topo_tut_25

2i. Back in QGIS, go to Plugins > Fetch Python Plugins, and make sure the OpenStreetMap plugin is installed.

topo_tut_26

topo_tut_27

2j. Now go to Manage Plugins and check to enable the plugin.

topo_tut_28

topo_tut_29

2k. A new menu item should appear. Now under Web > OpenStreetMap, click Load OSM from file. Select your downloaded .osm file.

topo_tut_30

2l. Once again, the vectors didn’t show up, and that’s due to the different projection system used by OSM. Those who know QGIS might say you can turn on ‘on the fly’ CRS transformation, but I recommend against that because we’re only using QGIS to save the information out to CAD. ‘On the fly’ CRS transformation only temporarily matches layers with different CRSs; when you export the information, you’ll be screwed…

topo_tut_31

2m. To change the projection of the OSM data, right click on the layer and click Save As. Check that CRS is set to Project CRS and Format is ESRI Shapefile. Click OK. Do the same for both the Line and Polygon layers.

topo_tut_32

topo_tut_34

3. Zooming in and Cropping Out
Now that we’ve brought all the information we want into QGIS, it’s time to zoom in and define our site boundary a little further. What we want to be able to do is freehand-sketch a boundary, and then crop everything else according to that new boundary.

3a. First go to Layer > New > New Shapefile Layer. This will be our temporary sketch layer and can be trashed afterward. Make sure Polygon is selected under Type and the CRS is the same as the project’s.

topo_tut_35

topo_tut_36

3b. Back in the layer window you’ll see a new layer. Highlight the layer and click on the pen icon in your menu bar to make the layer editable. Then click on the Add Feature icon to start drawing your new boundary.

topo_tut_37

topo_tut_38

topo_tut_39

3c. Now that you have sketched out your new boundary, you can use QGIS to generate a proper rectangular boundary. Go to Vector > Research Tools > Polygon from Layer Extents, and you should see a rectangular shape added to your layer.

topo_tut_42

3d. Next we want to use the newly created boundary to crop all the data: the NED, sat img, and OSM. First we’ll deal with the sat img. Since the images come in small tiles, some of which intersect the new boundary and some of which don’t, I want to merge them all into one image so I only need to crop once. To do that we go to Raster > Misc > Merge. Since the files are in different folders, put them all inside another folder, check Choose input directory instead of files, and check Recurse subdirectories; QGIS will cycle through all the folders, find all the .geotiff files, and tile them. You should now see a new merged image and may remove the individual tiles.

topo_tut_43

topo_tut_44

topo_tut_45

3e. To crop the image, go to Raster > Extraction > Clipper. Give it a new name, set the save format to GeoTIFF, check Mask layer, and select the rectangular boundary. Do the same with the NED layer, once again saving as GeoTIFF. Once it’s done, change the color mapping to Pseudocolor and turn on the transparency of the sat img; you should get something like this.

topo_tut_46

topo_tut_47

topo_tut_49

3f. Cropping vector data is similar: go to Vector > Geoprocessing Tools > Clipper, choose the layer you want clipped and the boundary to use. Do that with both the Line and Polygon layers.

topo_tut_50

topo_tut_51

topo_tut_52

4. Generating Mesh
Now that we have processed all the data, we’re ready to take it outside of QGIS and into other design environments. We’ll generate a mesh out of the NED data. UPDATED: the previous version of this lesson used Rhino Grasshopper as the platform. It works, but it’s highly limited due to GH’s memory consumption. The process has been re-written in Rhino Python and is now a much more robust, cross-platform engine. Generating a 3 million polygon terrain took about 60 seconds and a 5 million polygon one about 90 seconds, roughly 50,000 polygons per second on a 4 year old laptop. Not too bad.

4a. First we’ll convert the NED file: Raster > Conversion > Translate, exporting the NED data to Arc/Info ASCII Grid. As the name suggests, this converts the data to a delimited text file, which can then be read easily by many other platforms. Once you’ve created the .asc file, open it with a text editor such as SubEthaEdit or Notepad++. The only thing you have to look out for is whether a No Data Value is assigned. A no data value is only assigned if there is a mistake in the cropping process, meaning the crop went outside of the image into an “empty” pixel zone. If you get an error message mentioning No Data Value, either delete that line in the .asc file or regenerate the .asc file with a proper crop boundary.

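Before opening the file in Rhino, you can also peek at the header from Python to confirm whether a NODATA_value line is present. A minimal sketch, assuming a standard Arc/Grid ASCII header; the filename is only an example.

#peek at the header of an Arc/Grid ASCII file
asc = open("centralia_ned.asc")            #point this at your own .asc file
header = [asc.readline().split() for i in xrange(6)]
asc.close()

for line in header:
    print ' '.join(line[:2])               #keyword and value (a data row will just show numbers)

if any(line and line[0] == 'NODATA_value' for line in header):
    print "NODATA_value found: delete that line or re-crop before meshing"
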
topo_tut_53

topo_tut_55

topo_tut_54

4b. UPDATED: Run the Rhino Python script. You will immediately be prompted to open a file; point it to the Arc/Grid .asc file. The script will do the rest.

download

#-----------------------------------------
# Script written by Ted Ngai 4/26/2014
# copyright Environmental Parametrics, a research entity of atelier nGai
# This Rhinoscript automates the process of generating digital terrain
# model. File must be converted to .asc Arc/Grid ASCII format with 
# QGIS or GDAL library. 

# You should always reference the original metadata to determine
# the unit and scale of the file. And although z scale is always in meters
# depending on the projection system used, your x-y scale might be way off
#-----------------------------------------

import math
import time
import rhinoscriptsyntax as rs

#timer object
t1 = time.time()

#open and read the Arc/Grid file
fname = rs.OpenFileName("Open", "Arc/Grid ASCII Files (*.asc) |*.asc||")
f = open(fname)
lines = f.readlines()
f.close()

# reading metadata from the file header
[n,ncol]=lines[0].split()
[n,nrow]=lines[1].split()

ncol = int(ncol)
nrow = int(nrow)

[n,xmin]=lines[2].split()
[n,ymin]=lines[3].split()
[key,cellsize]=lines[4].split()
dx = cellsize
dy = cellsize
s = 5

# an optional NODATA_value line may follow the header
if lines[5].split()[0] == 'NODATA_value':
    s = 6

# some exports write separate dx / dy lines instead of a single cellsize line
if key == 'dx':
    [n,dx]=lines[4].split()
    [n,dy]=lines[5].split()
    s = 6
    if lines[6].split()[0] == 'NODATA_value':
        s = 7

dx = float(dx)
dy = float(dy)

# for certain projection system, this would scale the x-y to appropriate size
if dx < 1:
    #calculate cellsize in meters
    r = 6378.137
    lat1 = float(ymin) * math.pi/180
    lat2 = (float(ymin) + float(dy)) * math.pi/180
    lon1 = float(xmin) * math.pi/180
    lon2 = (float(xmin) + float(dx)) * math.pi/180

    d = math.acos(math.sin(lat1)*math.sin(lat2)+math.cos(lat1)*math.cos(lat2)*math.cos(lon2-lon1))*r*1000
    theta = math.atan(float(dy)/float(dx))# * 180 / math.pi

    dx = math.cos(theta) * d
    dy = math.sin(theta) * d

#read heightfield data
z = []
for s in xrange (s, len(lines)):
    z.extend (lines[s].split())

#generate x and y range
x = []
y = []

for v in range(0,nrow):
    for u in xrange(0,ncol):
        x.append(u*dx)
        y.append(v*dy)

# generate face vertices
face = []
for n in range(0,(nrow-1)*(ncol)):
    if n%(ncol) != 0:
        face.append((n-1,n,n+ncol,n-1+ncol))

# generate vertex coordinates
vertices = []
for n in range(0,len(z)):
    vertices.append((x[n],y[n]*-1,float(z[n])))

#make mesh in rhino
rs.AddMesh(vertices,face)

#timer object
t2 = time.time() - t1
timer = ('elapsed time : '+ str(t2) + ' seconds')

rs.MessageBox(timer, buttons=0, title=None)


topo_tut_69

4d. Next we’ll need to check the scale of this topography. The vertical values are good because they came straight from the data file; the length and width, however, were generated arbitrarily, so we need to go back to QGIS to bring more information in. Fortunately we already have the site boundary layer; we just need to save that file out as .dxf and bring it into Rhino3D. So back in QGIS, right click on the boundary layer and click Save As. Make sure the CRS is Project CRS and the format is AutoCAD DXF. Now import it into Rhino and you should see the rectangle in the middle of nowhere; Zoom All and move it back to the origin. Now use Scale 2D to scale the mesh to match the rectangle.

topo_tut_70a

topo_tut_71

topo_tut_72

5. Image Processing
Now that we have the mesh, it's time to go back to the images and get more out of them. First we'll take the NED data into Photoshop purely for presentation purposes. Then we'll go back to the sat img to look at the other "bands" of data embedded within the file.

5a. If you try to open the NED file (in GeoTIFF format), you get something like this: a white image. Just like what happened in QGIS, the data is hidden inside and we just need to bring it out.

topo_tut_59

5b. Notice that on the file tab after the file name it says (Gray/32): this GeoTIFF is a 32-bit grayscale image and needs to be converted for Photoshop. PS is not yet friendly to 32-bit formats, although they're getting increasingly popular in HDR imaging, so we need to reduce it to 16-bit: Image > Mode > 16-bit. This opens the HDR Toning window; use Equalize Histogram. You should be able to see your NED image now.

topo_tut_61

topo_tut_62

5c. You want to change from Grayscale to RGB Color.

topo_tut_64

5d. Next is to add a Gradient Map, Image > Adjustments > Gradient Map

topo_tut_63

5e. Now choose your favorite gradient map or create your own. You should see the image changing in response to the colormap. Use it to reveal details; yes, at this point your data is no longer scientific, but purely for visual purposes. You should get something like this at the end.

topo_tut_65

topo_tut_66

centralia_NED19_clipped_gradient

6. Image Processing Cont.
For this part we need to use MultiSpec to process the satellite imagery. As you may remember, this set of images contains 4 bands of data. Other common images such as Landsat 7 contain 7 bands. This means the sensor used to capture these images records multiple wavelengths. We humans only see RGB, but for scientific research other wavelengths can begin to tell us things we can't perceive visually. The NAIP imagery contains the typical RGB bands plus an NIR (near infrared) band. If you were to just open this up in Photoshop you'd get a faint image; Photoshop is not made for research, so we once again need to process the data.

6a. The software is very straightforward. Open Image, and a dialog box comes up. Change these settings: Magnification - 0, Stretch Linear, Min/Max: Clip 0% of Tails, Treat 0's as Black. Click OK and you should see something like this, a nice high-res satellite image.

topo_tut_73

topo_tut_75

topo_tut_76

6b. Now Open Image again and change these settings while keeping the rest the same: Red - 4, Green - 1, Blue - 2. You're now creating what is called an NRG image. The 4th band of the image is the near infrared radiation that is reflected off vegetation. You now place that in the red channel of the RGB image so the reflected NIR radiation shows up as the red component, a common way for researchers to identify vegetation.

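The same NRG composite can also be built outside of MultiSpec with the GDAL Python bindings, NumPy, and Pillow, which is handy for batch-processing tiles. A minimal sketch; the filenames are only examples, and the band order (1-4 = red, green, blue, NIR) is as described above.

from osgeo import gdal
import numpy as np
from PIL import Image

ds = gdal.Open("naip_tile.tif")                             #example filename
nir   = ds.GetRasterBand(4).ReadAsArray().astype(float)
red   = ds.GetRasterBand(1).ReadAsArray().astype(float)
green = ds.GetRasterBand(2).ReadAsArray().astype(float)

#place NIR in the red channel, red in green, green in blue
nrg = np.dstack([nir, red, green])
nrg = (255.0 * nrg / nrg.max()).astype(np.uint8)            #simple linear stretch

Image.fromarray(nrg).save("naip_tile_nrg.png")
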
topo_tut_77a

topo_tut_78

Finally, all the images generated can be used for texture mapping as well.

topo_tut_79

re-envisioning the hyde at rensselaer
Mon, 20 Feb 2012 | ted

urban metabolic morphologies
Mon, 20 Feb 2012 | ted

A collection of urban morphology visualizations based on NASA JPL’s Shuttle Radar Topography Mission (SRTM) data. It is part of a series of visualizations created for a research project I’m working on, urban metabolic morphologies. Data processing was done in Matlab, the vector data is from the OpenStreetMap project, and georeferencing was done in QGIS.

This set of images compares the intricate relationship between topography, hydrography, and urban form. 12 of the world’s fastest growing cities were studied to see how urban areas grow into their surrounding landscapes.

The images shown here are not to scale so the cities should not be compared with one another.

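The slope and wetness index layers below are standard terrain derivatives. The originals were computed in Matlab; for reference, here is a minimal Python/NumPy sketch of the same quantities, using placeholder data in place of a real DEM and a flow-accumulation grid from a GIS tool.

import numpy as np

#placeholder inputs: dem is a 2D elevation grid in meters, flowacc a
#flow-accumulation grid (in upslope cells) that a GIS tool would normally provide
cell = 90.0                                        #SRTM3 cell size in meters, roughly
dem = np.random.uniform(0, 500, (200, 200))
flowacc = np.random.uniform(1, 1000, (200, 200))

dzdy, dzdx = np.gradient(dem, cell)                #elevation gradients
slope = np.arctan(np.sqrt(dzdx**2 + dzdy**2))      #slope in radians

#topographic wetness index: ln(a / tan(slope)),
#where a is the upslope contributing area per unit contour width
a = flowacc * cell
twi = np.log(a / np.tan(np.maximum(slope, 1e-6)))  #small floor avoids division by zero
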
New York City, New York
SRTM layer

New York City, New York
SRTM + Openstreetmap layer

New York City, New York
Slope layer

New York City, New York
Flow Accumulation layer

New York City, New York
Wetness Index layer

Tokyo, Japan
SRTM layer

Tokyo, Japan
SRTM + Openstreetmap layer

Tokyo, Japan
Slope layer

Tokyo, Japan
Flow Accumulation layer

Tokyo, Japan
Wetness Index layer

Mexico City, Mexico
SRTM layer

Mexico City, Mexico
SRTM + Openstreetmap layer

Mexico City, Mexico
Slope layer

Mexico City, Mexico
Flow Accumulation layer

Mexico City, Mexico
Wetness Index layer

Seoul, Korea
SRTM layer

Seoul, Korea
SRTM + Openstreetmap layer

Seoul, Korea
Slope layer

Seoul, Korea
Flow Accumulation layer

Seoul, Korea
Wetness Index layer

Jakarta, Indonesia
SRTM layer

Jakarta, Indonesia
SRTM + Openstreetmap layer

Jakarta, Indonesia
Slope layer

Jakarta, Indonesia
Flow Accumulation layer

Jakarta, Indonesia
Wetness Index layer

Sao Paulo, Brazil
SRTM layer

Sao Paulo, Brazil
SRTM + Openstreetmap layer

Sao Paulo, Brazil
Slope layer

Sao Paulo, Brazil
Flow Accumulation layer

Sao Paulo, Brazil
Wetness Index layer

Rio de Janeiro, Brazil
SRTM layer

Rio de Janeiro, Brazil
SRTM + Openstreetmap layer

Rio de Janeiro, Brazil
Slope layer

Rio de Janeiro, Brazil
Flow Accumulation layer

Rio de Janeiro, Brazil
Wetness Index layer

Hanoi, Vietnam
SRTM layer

Hanoi, Vietnam
SRTM + Openstreetmap layer

Hanoi, Vietnam
Slope layer

Hanoi, Vietnam
Flow Accumulation layer

Hanoi, Vietnam
Wetness Index layer

Nairobi, Kenya
SRTM layer

Nairobi, Kenya
SRTM + Openstreetmap layer

Nairobi, Kenya
Slope layer

Nairobi, Kenya
Flow Accumulation layer

Nairobi, Kenya
Wetness Index layer

Lagos, Nigeria
SRTM layer

Lagos, Nigeria
SRTM + Openstreetmap layer

Lagos, Nigeria
Slope layer

Lagos, Nigeria
Flow Accumulation layer

Lagos, Nigeria
Wetness Index layer

Dhaka, Bangladesh
SRTM layer

Dhaka, Bangladesh
SRTM + Openstreetmap layer

Dhaka, Bangladesh
Slope layer

Dhaka, Bangladesh
Flow Accumulation layer

Dhaka, Bangladesh
Wetness Index layer

Shanghai, China
SRTM layer

Shanghai, China
SRTM + Openstreetmap layer

Shanghai, China
Slope layer

Shanghai, China
Flow Accumulation layer

Shanghai, China
Wetness Index layer

Taipei, Taiwan
SRTM layer

Taipei, Taiwan
SRTM + Openstreetmap layer

Taipei, Taiwan
Slope layer

Taipei, Taiwan
Flow Accumulation layer

Taipei, Taiwan
Wetness Index layer

endothermic morphologies
Fri, 02 Dec 2011 | ted

Thermoregulation is the ability of an organism to keep its body temperature within certain boundaries, even when the surrounding temperature is very different. This process is one aspect of homeostasis: a dynamic state of stability between an animal’s internal environment and its external environment.
Organisms can generally be divided into two types of thermoregulators, endotherms and ectotherms.

Endotherms create most of their heat via metabolic processes and are colloquially referred to as warm-blooded. Ectotherms’ temperature comes mostly from their environment.

This set of experiments attempts to combine thermoelectric materials and heat pipe technology to actively thermoregulate an environment.








AMPS Research
Thu, 03 Nov 2011 | ted

Research and design of the Active Modular Phytoremediation System. The system geometry is based on the I-WP triply periodic minimal surface, one of the 13 of its kind technically described by Alan Schoen in his 1970 paper. The geometry has many properties that make it ideal for creating a modular wall system that is also a plenum, as well as housing a large amount of integrated hydroponic and electronic equipment.

psychrometrics with Matlab
Sun, 27 Feb 2011 | ted

This is the second tutorial on climate visualization using Matlab, this time on plotting weather data on a psychrometric chart.

Although architects are most familiar with the psychrometric chart as a tool to understand the comfort zone and identify appropriate passive design strategies, psychrometrics is more commonly used by mechanical engineers to study air and water vapor mixtures for active mechanical devices that condition air.

Tools like Autodesk’s Ecotect Weather Tool or Climate Consultant are typically sufficient for architects to study comfort zone and passive design strategies. However, for active applications, we need to be able to enter our own data points, such as ones that come from mechanical specs or material properties.

4 scripts and 2 data files are included in this distribution.

  • Make sure all 6 files are mapped into your Matlab path. The easiest way is to put all the files into your My Documents\Matlab folder, or on OSX, Documents\Matlab.
  • Open ReadEPW.m. This is an updated version and does not require you to manually input the file path.
  • Click Run and locate the desired Energy+ Weather file.
  • Open EPW2Hourly.m and click Run.
  • Type in the command:
    psychro(dbt,ahm,0,0,'scatter');
    This will create a scatter plot of all the hourly values of dry bulb temperature and absolute humidity.
  • To separate the data by month, use commands such as
    psychro(dbt(aug),ahm(aug),0,0,'scatter');
    psychro(dbt(apr),ahm(apr),0,0,'scatter');
  • Sometimes having all the hourly data is not very useful because there’s simply too much information. Run the EPW2Daily.m file; this will reshape the 1×8760 matrix to 24×365, so we can easily find the daily maximum and minimum and plot the diurnal changes.
  • Type in the command:
    psychro(dbtMax,dbtMin,ahmMax,ahmMin,'line');
    This will give you a line plot of all the diurnal changes. The line will start at the higher temperature and end at the lower temperature.
  • Once again, we can separate it into individual months with commands such as
    psychro(dbtMax(aug),dbtMin(aug),ahmMax(aug),ahmMin(aug),'line');
    psychro(dbtMax(apr),dbtMin(apr),ahmMax(apr),ahmMin(apr),'line');

If you get an error like this one

??? Error using ==> fgetl at 44
Invalid file identifier.  Use fopen to generate a valid file
identifier.
Error in ==> ReadEPW at 16
loc = fgetl(fid);

That means Matlab is not recognizing the folder of your EPW file. Go to File > Set Path and locate the folder.

To set a different color for each month and automatically save the charts, you can do the following.

%written and copyrighted by Ted Ngai www.tedngai.net
%version 0.1 Mar 2011

xSize = 6; %This is the horizontal size of the figure
ySize = 4; %This is the vertical size of the figure
months = {1:31, 32:59, 60:90, 91:120, 121:151, 152:181, 182:212, 213:243, 244:273, 274:304, 305:334, 335:365};
monthstr = {'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec'};
figure;
c=0;
for x = 1:12

    %get current month
    month = cell2mat(months(x))';

    %get min/max of month
    maxmon=max(month);
    minmon=min(month);

    %set color value
    nColor=[1,0.8-c,0.1+c];
    c=c+0.07;

    %plot graph
    psychro(dbtMax(month),dbtMin(month),ahmMax(month),ahmMin(month),'line',nColor);

    xlabel(cell2mat(monthstr(x))); %label the chart with the month name
    %ylabel('temperature (\circC)'); %input the label for the y axis

    currentFig = gcf;
    currentAxes = gca;

    set(currentAxes,'FontSize',6);
    set(currentFig, 'PaperUnits', 'inches'); %change paperspace units here (also below)
    set(currentFig, 'PaperSize', [xSize ySize]);
    set(currentFig, 'PaperPositionMode', 'manual');
    set(currentFig, 'PaperPosition', [0 0 xSize ySize]);
    set(currentFig, 'Units', 'inches'); %change paperspace units here too
    set(currentFig, 'Position', [0, 0, xSize, ySize]);

    axis([minmon maxmon 0 120]);

    fn=['psychro_daily_',city,num2str(x),'.ai'];
    hgexport(currentFig,fn); %change the filename here

end
clear xSize ySize currentFig currentAxes fn temp maxmon minmon month ;

AMPS Mockup
Mon, 21 Feb 2011 | ted

Full scale mockup of the Active Modular Phytoremediation System. This is both a visual mockup and a testing prototype we will use to collect data. It’ll be installed inside SOM’s office, where we will track the rate of VOC removal, the plants’ behavior and lighting requirements, and the automation of the lighting, moisture sensing, and irrigation systems.

Ecophysiological Architecture
Mon, 21 Feb 2011 | ted

Through evolution, animals and plants develop strategies to survive and thrive in their own environments. Survival mechanisms such as thermoregulation, water economy, and energy metabolism are common to all organisms. Physiology, since the famed French scientist Claude Bernard, has become the study of how such mechanisms manifest under isolated and controlled conditions. Ecophysiology, on the other hand, pursues these studies in the subject’s own environment and allows for a much more natural observation and analysis of organisms’ responses to the dynamics of resource availability and diurnal and seasonal switches. Architecture, often conceptualized as the third skin, can learn much from such a physiological approach, particularly in the face of global climate change and energy crises. Unlike conventional architectural practice, organisms do not deal with heliotropism separately from thermoregulation, nor do they handle water economy separately from energy conservation and metabolism. To maintain a stable internal environment, organisms must rely on everything at their disposal as part of their survival strategy. Ecophysiological architecture therefore posits the same fundamental basis and asks a simple question: “what if buildings had to develop heating, cooling, lighting, daylighting, and ventilation strategies as part of their morphology?”

The following are selected works from a studio investigation that attempts to reapply these highly coupled physiological systems to the building’s mechanical and morphological systems, imbuing a behavioral solution that addresses the ecology of our built environment.

Projects

Joe Hines | Active Thermoregulation









Shima Miabadi | Counter Current Heat Exchange








Nicholas Stipinovich | Wind Responsive Envelope










Lauren Thomsen | A Photoperiodic Envelope







Adam Petela | Adaptive Networking







