Thu 15 May 2008
Where 2.0: Lifemapper 2.0: Using and Creating Geospatial Data and Open Source Tools for the Biological Community
Posted by john under Events, Geo Stuff
I work at the Biodiversity Institute at the University of Kansas, with a background in geography, GIS, remote sensing, and computing. Most recently I've worked on "Lifemapper" – creating an archive of ecological niche models: maps of where species might occur, based on where we know they've been collected by scientists. We're also creating web services that expose the archive.
Showing what this looks like in Google Earth: specimen locations of a plant. Red areas are where we expect the species to occur, from the ecological niche models; yellow is where it has actually been found. You can see that these particular records came from the Burke Museum in Washington. The goals are an archive of niche models and predictions.
Spatial data is served in OGC web formats: WCS for raster data and WMS, with WFS coming soon. No query/analysis services yet, but they're coming in the next month or so: landscape metrics, fragmentation of habitat, comparison of habitats of different species, predicted future climates…
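An OGC map request is just a URL with standardised query parameters, which is what makes these services easy to script against. A minimal sketch of building a WMS 1.1.1 GetMap URL – the endpoint and layer name below are hypothetical, but the query parameters are the standard ones from the WMS specification:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512):
    """Build a WMS 1.1.1 GetMap URL. bbox is (minx, miny, maxx, maxy)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(c) for c in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, roughly continental-US extent.
url = wms_getmap_url("http://example.org/lifemapper/wms",
                     "species_prediction", (-125.0, 24.0, -66.0, 50.0))
```

Any WMS client (including Google Earth via a network link) can consume such a URL directly.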
There are also on-demand models: niche modelling as a web service on a 64-node cluster. Anyone can use this. Our archive has limitations – no significant quality control; we assume that's been done by the museums, but more could really be done. The on-demand service can be used by researchers with their own data, perhaps at higher resolution.
Niche modelling is becoming more popular because more specimen data is becoming available. Environmental and occurrence data go into a model, which calculates a formula; that can also be projected onto a map to help visualise it.
Data is more available because there's a massive effort to database and geotag it. Records might be in paper catalogues – that's how they've been collected for 300 years – and are now being put into databases to digitise the collections. Data is also being exposed via the internet using standard protocols. A slide shows examples of 3 collections that, when put together, give a more powerful combined collection.
Several scenarios of economic development, regionalisation, and environmental change, modelled by the scientists who shared the Nobel Peace Prize with Al Gore. We use multiple scenarios and compare them "apples to apples".
We take this distribution data and put it together with ecological variables through a modelling algorithm to get an ecological niche model of where the species could occur, using 15-20 variables. The model is then output through a projection tool to project it onto a map.
Specimen point data is used to create a model with an algorithm against the current environment, then projected back to get the distribution in the native region. Done with climate models instead, you get the distribution after climate change. The significance: by looking at non-native regions, you can see which areas might be vulnerable to invasion by the species after climate change.
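The fit-then-project workflow above can be sketched with a toy envelope-style model. Lifemapper actually runs openModeller's algorithms over 15-20 environmental variables, so this two-variable version is purely illustrative:

```python
# Toy environmental-envelope model: learn the min/max of each environmental
# variable at known occurrence points, then mark any cell whose values fall
# inside that envelope as potentially suitable. Projecting the same envelope
# onto future-climate cells gives the "after climate change" picture.

def fit_envelope(occurrences):
    # occurrences: list of dicts, e.g. {"temp": 12.0, "precip": 800.0}
    keys = occurrences[0].keys()
    return {k: (min(o[k] for o in occurrences),
                max(o[k] for o in occurrences)) for k in keys}

def project(envelope, cells):
    # cells: environmental values per map cell (current or future climate)
    return [all(lo <= cell[k] <= hi for k, (lo, hi) in envelope.items())
            for cell in cells]

env = fit_envelope([{"temp": 10, "precip": 700},
                    {"temp": 14, "precip": 900}])
current = project(env, [{"temp": 12, "precip": 800},   # inside the envelope
                        {"temp": 20, "precip": 400}])  # outside it
```

The same `project` call against a grid of future-climate values is what reveals regions that become newly suitable – i.e. potentially invadable.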
The archive is created with a pipeline: it constructs a request, which is passed to a 64-node cluster with web services in front. The nodes retrieve environmental data using WCS and dispatch the job to openModeller; the pipeline polls for status, then retrieves and catalogues the data.
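The poll-for-status step is a plain polling loop. A sketch, with a stand-in status function since the real service endpoints weren't shown in the talk:

```python
import time

def wait_for_job(get_status, poll_interval=0.0, max_polls=100):
    """Poll get_status() until the job finishes or we give up."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("COMPLETE", "ERROR"):
            return status
        time.sleep(poll_interval)
    return "TIMEOUT"

# Fake status source standing in for the cluster: done on the third poll.
_responses = iter(["QUEUED", "RUNNING", "COMPLETE"])
result = wait_for_job(lambda: next(_responses))
```

In the real pipeline, `get_status` would be an HTTP call to the cluster's web-service front end, and a `COMPLETE` result would trigger retrieval and cataloguing of the model output.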
Data is exposed on the website but also via web services, and can be viewed in Google Earth.
Showing samples where we have the data in the database but don't have lat/lon – that's around 80% of them. ~220 institutions are providing data, comprising about 600 collections: fish, mammals, etc.
Another way to access the data is to request it directly: an experiment consists of all the data plus the models and maps produced on top. These are just URLs, so they can be accessed programmatically.
You can also "submit an experiment": you construct search parameters and get back a URL with information for that experiment. You get really basic data back for it, shown projected onto 4 different climate scenarios – current and 3 future ones – along with metadata for the collection and other properties.
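Since experiments are just URLs returning machine-readable metadata, they can be consumed programmatically. A sketch of parsing such a response – the XML shape below is invented for illustration, not the real Lifemapper format, but the pattern is the same:

```python
import xml.etree.ElementTree as ET

# Hypothetical experiment-metadata response: one occurrence set, projected
# onto the current climate plus three future scenarios (as in the demo).
sample_response = """
<experiment id="1234">
  <occurrenceSet>Burke Museum plant records</occurrenceSet>
  <projection scenario="current"/>
  <projection scenario="future-a"/>
  <projection scenario="future-b"/>
  <projection scenario="future-c"/>
</experiment>
"""

root = ET.fromstring(sample_response)
scenarios = [p.get("scenario") for p in root.findall("projection")]
```

In practice you'd fetch the experiment URL over HTTP first; everything after that is ordinary XML handling.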
* Distributed computing
* Screen savers
* Competitive, like SETI@home
* Captured the wrong audience
People weren't really interested in the topic, and we couldn't handle the demands of that audience.
* Now: funded, cluster computing, open source software, standards