-
Code Monkey for Hire
A few weeks ago I handed my notice in to Multimap. My last day will be Friday 18th July. While I’ve really enjoyed my time here I’ve decided that I want to branch out and try some new things. I’ve had the privilege of designing and building one of the most important products that Multimap has delivered over the past 3 years - the Multimap JavaScript API. Specialising in this way has been a great experience but I’m intending to broaden my horizons to cover more back-end technologies, which is where most of my previous experience has been.
My intention is to start doing freelance work once I have completed my notice period with Multimap. I have arranged work with some good friends that should keep me pretty busy for the first few months after leaving but a large part of what I’m looking for in the future is variety so I will be very interested to hear from other friends who have interesting projects that they might like my help with.
Due to contractual obligations I may be looking to minimise the amount of geowanking I do for the first few months. Fortunately I do have five years’ experience in PHP, three years’ experience of intensive JavaScript coding and various bits of experience in Ruby on Rails and even a little Python. I also have good experience in setting up Linux-based web and database servers. I’ve been using and contributing to the interweb for something like 14 years, so yes, I do remember Netscape 1.0 (background images!), the launch of Yahoo! and the BBC’s wonderful “list of interesting web pages”. I was also putting music online when Shawn Fanning was probably sleeping peacefully in his dorm room, and I was doing it legally (I think)!
If you are interested in hiring me then you might like to take a look at my CV which you can find here (yes I know I need to replace the dodgy matrix). If you want to get in touch then you could try contacting me on my linkedin.com page, or just email me at my first name @ my surname dot net. If you’re having trouble getting me then drop a comment on this page and I’ll get in touch with you. I’m not looking to hear about a thousand unsuitable posts from agencies but if someone from an agency has something flexible to offer that’s relevant and can fit in with my other obligations then that will be fine.
I will still be attending the State of the Map conference this weekend. I’m really looking forward to it and it promises to be a lot of fun. I will also be representing Multimap for the last time while giving a presentation on “Using Crowd Sourced Data in a Commercial API” which I’m hoping will be interesting enough for those that attend. If the conference is anything like last year though there will be plenty of good presentations to contend with.
Technorati tags: resignation, multimap, microsoft
-
Up all night, Mashed
So I’ve now been up basically all night (5:14am, no intention of going to bed). Unfortunately I haven’t spent the whole night hacking; in fact the hacking task I was working on - getting the search API into the Ruby library - was completed hours ago. I’ve actually spent the last five hours or so playing Rock Band.
We started playing cooperatively with two guitars and a drum set until around 3am when we got complaints about the noise of the drums (even though they’re not real, whacking some bits of plastic with real drum sticks makes quite a lot of noise!) Funnily enough though, the complaints came not from the sleepers but from the people playing Werewolf in a big group!
But back to the search API. I’ve added three new classes to the mmruby library: MMSearch, MMSearchFilter and MMSearchRequester. To perform a search you create an MMSearch object and then pass it to the MMSearchRequester class’s static search method:
s = MMSearch.new( { :data_source => 'mm.poi.global.general.atm', :address => MMAddress.new( { :qs => "L19 4UD" } ) } )
r = MMSearchRequester.search(s)
I have realised that the routing API also needs wrapping. It’s unlikely that I’ll get around to doing that today but I’m sure I’ll do it before long. I’ll post again once that’s done.
Again, this is not thoroughly tested, just hacked together; any comments appreciated. Download mmruby-0.2 here.
Technorati tags: ruby, mashed08, multimap, web, service, REST
-
A weekend getting Mashed
I’m spending this weekend at Alexandra Palace at the Mashed event that’s being arranged by the BBC. I went to the similar “Hack Day” event that they ran last year (and blogged about it) and had a great time. This time I’ll actually be there in a vaguely official capacity as Multimap and Microsoft are supporting the event. Multimap have already blogged about it but I thought I’d write a few lines too and release some software that might come in handy.
One thing that we’re really proud of at Multimap is our RESTful web service APIs. These give you access to geocoding and (faux) reverse geocoding services as well as access to our great search APIs.
As part of a project I’ve been working on in my spare time I’ve put together some Ruby wrappers for these APIs, so I thought I’d get these released in case they come in handy for anyone during Mashed. I’m not an experienced Ruby programmer so I can’t give any assurances of the quality, but hopefully they’ll come in handy. You’ll need to have the ‘json’ library installed as well as Net::HTTP. I’ve only used it in a Rails setting by placing the .rb files in the lib directory, so your mileage may vary. I’d really appreciate comments, bug fixes and anything more you’d like to add. The libraries give you access to geocoding and national grid conversion, together with a few handy geo-related classes (lat/lon, bounding box). Unfortunately I haven’t got around to wrapping the search API but it shouldn’t be too hard to do; I might even do it as a hack during Mashed. I might see if anyone’s added any comments before deciding whether to do it though :D
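For a flavour of those handy geo classes, here’s a minimal, self-contained sketch of a lat/lon and bounding-box pair. This is illustrative only - the actual names and API in mmruby may differ:

```ruby
# Illustrative lat/lon point and bounding box, in the spirit of the
# classes the library provides (names here are mine, not mmruby's).
class LatLon
  attr_reader :lat, :lon
  def initialize(lat, lon)
    @lat, @lon = lat.to_f, lon.to_f
  end
end

class BoundingBox
  # Built from south-west and north-east corner points.
  def initialize(sw, ne)
    @sw, @ne = sw, ne
  end

  # True if the point falls inside (or on the edge of) the box.
  def contains?(point)
    point.lat.between?(@sw.lat, @ne.lat) &&
      point.lon.between?(@sw.lon, @ne.lon)
  end
end

london = BoundingBox.new(LatLon.new(51.3, -0.5), LatLon.new(51.7, 0.3))
puts london.contains?(LatLon.new(51.5, -0.1))  # true
```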
So download mmruby 0.1 here and tell me what you think of it below!
Technorati tags: ruby, mashed08, multimap, web, service, REST
-
WhereCamp: Second day of notes
Just published my roundup of the first day. I found it pretty difficult as I hadn’t taken any notes yesterday, so I’ll try taking some today. I’m going to try something a little different today: I’ll publish this article in the morning then keep typing in it during the day. Unfortunately WordPress doesn’t seem to be autosaving, but I’ll click “Save” from time to time. I’ll also put something at the end of the article to mark it finished.
Micro, Nano, Pico formats Talking about marking up the various formats with geo information: KML, GeoRSS, Atom. Formats are too big, they need to be smaller. Need to be able to store multiple locations and time information potentially. Discussing the best ways of linking to multiple locations, whether to place them as separate entries within Atom (which in old viewers will show the locations as separate entries), also whether we should insert links within text, or have separate links. Do those links link within a file, or to separate files? Probably the best place to look for updates on this part of the session is Andrew Turner’s blog.
Geohash from geohash.org, a simple way of giving bounding boxes or lat/lon coordinates. Easy to search within a bounding box - I thought it would have issues as it’s based on predefined tiles, but apparently not; this does not use tiling.
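Geohash itself is easy to sketch: it alternately bisects longitude and latitude, emits one bit per step, and packs each run of five bits into a base-32 character. A quick illustration (my own, written from the published format, not code from geohash.org):

```ruby
# Base-32 alphabet used by geohash (note: no "a", "i", "l" or "o").
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision = 11)
  lat_range = [-90.0, 90.0]
  lon_range = [-180.0, 180.0]
  bits = []
  even = true  # even bit positions bisect longitude, odd ones latitude
  while bits.length < precision * 5
    range, value = even ? [lon_range, lon] : [lat_range, lat]
    mid = (range[0] + range[1]) / 2
    if value >= mid
      bits << 1
      range[0] = mid   # keep the upper half
    else
      bits << 0
      range[1] = mid   # keep the lower half
    end
    even = !even
  end
  # Every 5 bits becomes one base-32 character.
  bits.each_slice(5).map { |b| BASE32[b.join.to_i(2)] }.join
end

puts geohash_encode(57.64911, 10.40744)  # => "u4pruydqqvj"
```

A nice property is that truncating a geohash just widens the bounding box, which is what makes prefix searches work.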
Guy from xrosspath is talking about “Geotudes”, but they do seem rather similar to basic tiling mechanisms, quad-tree indexing and Morton numbers.
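For comparison, a Morton number (Z-order index) just interleaves the bits of the x and y tile coordinates, so tiles that are close in space usually get numerically close codes. A minimal sketch (my own illustration):

```ruby
# Interleave the low `bits` bits of x and y into one Morton number:
# bit i of x lands at position 2i, bit i of y at position 2i + 1.
def morton(x, y, bits = 16)
  z = 0
  bits.times do |i|
    z |= ((x >> i) & 1) << (2 * i)
    z |= ((y >> i) & 1) << (2 * i + 1)
  end
  z
end

puts morton(5, 3)  # => 27
```

A range scan over Morton codes then approximates a 2D box query on a plain 1D index, which is the trick quad-tree-style schemes exploit.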
Is geo privacy shit? Mapufacture has enabled Fire Eagle to generate an RSS/KML feed. Andrew Turner thinks lots of people will decide “screw privacy”; Fire Eagle think that’s fine, so long as it’s a choice.
Keys to allow specific people to get access.
Privacy issues stifle innovation, caused issues with Dopplr. Can we trust the government? Being near a criminal offence might cause you to become a suspect (or witness). Location black lists. Nathan Eagle relationship mining - ContextAware. Regulatory frameworks. Infer location through friends. Went through de Bono’s Six Thinking Hats method. Here are the notes that Andrew Turner took during the discussion.
Lightning Talks
- Platial - some maps that people have done.
- State of the Map
- Earthscape
- www.gotalift.com
- Geocoding news (at the source)
- NNDB Mapper
- Fixables
- Home-brew 360-degree display
- Quantum Nanotechnocracy?
- Neighborhood map project
- Image recognition game - Imagewiki
- Abaqus
Winners of the talks were: 8, 6, 1 (in that order)
Google AppEngine GeoStore This was mainly a roundup of the geo features available within the Google AppEngine.
Are Google and Microsoft killing the ecosystem? Much discussion on the topic: are Google et al killing the “mom and pop” size businesses? They are getting so big that no one can even start to compete with them because you can’t get near them. It was mentioned though that there wouldn’t have been the innovation there was so far without Google releasing the free maps API in the first place. There was also a small discussion at the end about whether Google could release data - India data and map edits - that they have collected themselves. Unfortunately not much information was known about those but the Googlers did seem open to doing something.
Well that’s the whole thing over. It was a great conference and I’m really glad I made it over. Some interesting discussions. As I mentioned above I kinda launched something; I’m going to get the tidying up sorted then post something further on that later.
-
WhereCamp: Not Live Blogging
So we’re now halfway through WhereCamp. I decided not to live blog the sessions as there was just too much discussion going on to keep up, and I wanted to be involved with these discussions which was a little hard if I was taking notes all the time. I’ll give a quick roundup of the ones I went to here but for more notes (and some alternative choices) you can check out my colleague’s blog here.
4D Mapping This was a pretty interesting look at the ways of representing time-based map data. We discussed the use of sliders for filtering visible data, spirals for representing the passage of time and various other methods. Take a look at the wiki page I’ve linked to for more detail.
Mapmaking and Visualization with Processing This session covered various visualizations that have been done using the Processing graphics toolkit. A few examples were shown together with a brief look at the source code, websites such as Cabspotting, Hackety and Obsessing.
Cyclone Nargis I actually ended up in this talk by accident, I got lost on the way to “Kiev”. It was, nonetheless, very interesting to hear about the ways in which Burmese volunteers are trying to get aid to the people in Burma. A number of resources were also mentioned including Sahana Disaster Management System and Myanmar Cyclone Relief Donations.
Is 3D Shit? Steve Coast requested this session to discuss whether all the money being put into 3D virtualization was really worth it. Discussion from a number of interested parties including people from Google and Planet 9. We basically decided that the people involved were generally investing for the future.
Location Tracking The location tracking session was actually organized by me, I’ve been tracking myself at various points for the past two years and have been building a site on-and-off for the past year so I was hoping to get together with some other people doing similar things. We managed to get people from Fire Eagle, Loki and iPoki. We had a good discussion about privacy, about ways to track and about requirements for accuracy but didn’t really come out with too many conclusions. I also kinda launched the site that I’ve been working on but it still needs some more work so I’ll do another blog post on that when I get a little more time.
Xrosspath Xrosspath is a new site that is intending to take location history for a number of users and compare them to find points in time that they have crossed paths. They’re also looking to link in world events and historical events to see what’s happened around you during your life. It was interesting how this session had links to earlier sessions including the 4D mapping and the location tracking sessions. It did seem like they should be linking into what other people are doing in location tracking. I definitely think their premise is valid though and it will be interesting to see what comes out of that.
I wussed out and didn’t stay overnight, I also didn’t get around to doing much hacking. After a couple of beers I was pretty tired. I did end up getting to bed earlier than I’ve managed for the whole trip and got a great night’s sleep. Hopefully that means I’ll be all ready to get hacking today!
-
Where 2.0: Activist Mapping
Erik Hersman
Talking about real world uses of our work, in terms of issues.
Grew up in Sudan and Kenya, now writes two blogs, “White African” and “AfriGadget”.
Excited about geotate.com - very lightweight, good in the field with unskilled people.
Also AfricaMap, very exciting as geolocation data was hard to find in Kenya.
Buglabs, any time you can hack hardware and software is seriously good. As more people use it, it’ll get cheaper and might get to 3rd world.
DIY Drones - great for crisis scenarios, how normal citizens can use it to help out
Illustrating that the tools we make are being used by people, perhaps with different backgrounds. Interesting over the coming years as GIS tools become easier to use, similar to what we’ve seen in the CMS area. We’re going to see something really big happen in the next few years.
Kenya Elections 2007 - issues and irregularities; the opposition leader didn’t agree with the outcome. What started as a political fracas devolved into an ethnic one. Showing slides of maps of polarities. People of differing descents were kicked out.
At the same time there was a media blackout - difficult, as there was no way to get information out. The only way to get messages out was SMS and phone calls; the outside world could only see through blogs and social media. A Harvard Law grad in Kenya got information out on her blog - kenyanpundit.com
“Our Goals”
- way for everyday Kenyans to report incidents of violence
- create an archive of news and reports
- show where the majority of violence was happening
“Building It”
- Detailed geospatial data is hard to come by in Africa
- How much should it be web-based in a mobile phone culture?
- Mobile phones - getting a full report in 140 characters is not easy
- What data points do we need?
We’re not part of humanitarian industry so don’t know what’s needed.
[Calendar of events]
- Dec 27 - elections
- Dec 27-30 - period of uncertainty
- Dec 30 - Jan 1 - media blackout
Launched Ushahidi.com by end of January.
[Demo]
Only took a few days but really worked. Allowed us to do something not everyone could do. Timeline of events, see events occurring in the field. Draggable timeline and events on updates.
Had the beginnings of crowdsourced crisis information; realised this was pretty new - we were new to this ourselves so hadn’t realised at first. So now deciding what we do next.
Lessons Learned
- The importance of mapping accuracy
- Data poisoning - what happens when your antagonist starts using it?
- Verification is difficult
- Clarify why it was created and make that inescapably obvious - this was for rough data, not for ICC
- Create a feedback loop to end user
“So, did it work?”
- Advocacy? - Yes, mainstream media was affected and brought attention to the situation
- Security? Probably not
- Monitoring? Probably not
- Information Gathering? Yes, pretty well.
Formed by Erik and four other Kenyans, also funded now.
Types of activist: beer activists, anarchists, people passionate about illegal immigrants, people passionate about immigrants.
Activism always has two sides, both sides are passionate.
“It turns out activists are just everyday people, most with limited technical acumen.”
Going to go through several activist sites.
Crisis in Darfur Google Earth application, from the US Holocaust Museum and Amnesty International.
Sudan ICC war criminals website - warrants out for their arrest.
Access Denied map from global voices, online censorship maps in closed countries.
“Tunisian Prison Map” - applying transparency to prisons in Tunisia.
Bahrain land rights - showing the difference in quality of life between the haves and have-nots.
Operation Murambatsvina - Zimbabwe - showing land distribution, very heavy handed.
Mapping election conditions in Zimbabwe - taking news data about heavy handed government acts against normal civilians well before the election time. Showing that there was a track record well before the election happened.
The Great Whale Trail Map
Planet action - different environmental causes around the world
“I love mountains” - US based, enter zip code or state or city and see how you’re connected to mountaintop removal.
Mapping for human rights violations vs Mapping for activism - two separate things.
First is GIS/neogeo - this is what happened, used for taking criminals to court, second is to create awareness of buy-in of an issue
“Think about how you can use your skills to help in a cause that is important to you”
We have the ability to affect issues miles away which we couldn’t do not long ago.
-
Where 2.0: A Data Source to Make Mashups Correct, Complete, Relevant and Revisited
Jonathan Lowe
Several companies have noticed the value in collaboration, OSM, geocommons, google base, others..
Structured databases mean they can collaborate worldwide.
Basics of freebase
Web-based database of community-entered data, managed by Metaweb.
Freebase’s data includes spatial data but is not exclusively spatial.
Have seen three glue domains, business, people, location.
Began with points, 262,000 locations, exposing an API this summer. Mashups will be able to query freebase for spatial data in multiple formats as well as the other data.
Examples on openlayers, google maps, others.
“What is semantically structured data?”
- Strongly typed data
- Hardwired data relationships
Enter “olympic torch” into google, from results extract descriptive text, filter out the search term. Send results to engine that generates tag clouds, then show resulting tag cloud to people who don’t know the search term. Ask them to guess.
foundation cookbook bertolli shattuck gourmet california restaurant berkeley alice waters - What is Chez Panisse? Semantic meaning gives a lot of information.
Brains are very good at semantics, which classify or type data, and form relationships.
What were brains doing? Noticing Alice and Waters were names, saw “co-founder” - that’s people. Maybe Alice Waters is a co-founder. Saw “restaurant”, that’s a business, started by a co-founder. Might have noticed references to place. Put them together and get restaurant in berkeley, california, co-founded by alice waters to get Chez Panisse.
How to put this into a database?
Entries have multiple types of properties, and properties have relationships. The business entry has geospatial data, as does Alice Waters. Berkeley has its own properties; these properties relate to Chez Panisse and Alice Waters.
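The entries-and-properties idea can be sketched as a toy data structure - this is my own illustration of the concept, not Freebase’s actual schema or API:

```ruby
# Toy strongly typed entities with hardwired relationships,
# using the Chez Panisse example from the talk.
alice    = { name: "Alice Waters", type: :person }
berkeley = { name: "Berkeley", type: :location, lat: 37.87, lon: -122.27 }
chez     = { name: "Chez Panisse", type: :business,
             founder: alice, location: berkeley }

# Because entries are typed and linked, queries can traverse the
# relationships: find founders of businesses in a given place.
def founders_in(entities, location_name)
  entities.select { |e| e[:type] == :business &&
                        e[:location][:name] == location_name }
          .map { |e| e[:founder][:name] }
end

p founders_in([chez], "Berkeley")  # ["Alice Waters"]
```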
If I went to Berkeley and want funding perhaps these relationships can help me, freebase can show these.
“Relevance by faceted browsers”
Search in results for references to people, then kinds of people.
Technorati tags: mashups, freebase, where, where2.0, where2008
-
Where 2.0: Google Maps for Mobile with My Location - Behind the Scenes
Adel Youssef
First, what is GMM? Maps for mobile phones.
“My Location” shows blue dot with big circle giving idea of how accurate your location is.
Not GPS accurate, but very useful.
Why is “My Location” useful?
A GPS-free service - and free! Saves battery; no problem with line-of-sight. Many applications benefit from this accuracy. No waiting for a first fix. Works across many carriers and network types.
- Collect geocontextual information along with a cell-id
- Cell Tower Identifier (cell-id)
- Location: GPS vs center of the map
Difficult to make it work across platforms and carriers - there is no unique ID across techs/carriers. How do we get location? If you have a GPS cell phone, we collect that and the cell ID. We can also benefit from geo info like where you’re looking at on the map. Anonymous - just stores cell tower and location information, GPS or non-GPS.
We store this in our platform and run algorithms to figure out the location.
- 100s of different platforms - causes many issues
- Area of interest vs actual location
- Noisy data
- Oklahoma points
- GPS errors
- Towers in the water!
This approach can cause problems, if we’re all looking at SF maps the Burlingame cell tower will be identified as SF. Wherever you are, we often think you’re in Oklahoma because that’s the “center” of the US on google maps. GPS points can also be wrong if you have bad signal. Sometimes we find we have towers in the center of the ocean due to averaging, can be due to cell towers on oil platforms, which alters the accuracy. Sometimes that’s right.
Clustering Algorithm
- GPS Clustering vs non-GPS
- Use data diversity to calculate accuracy
Have invested more time in analysing the data. Data collection has been growing exponentially, working around the world, including the Himalayas. Shows that this can work; non-GPS data is providing a large amount of the data, more than GPS.
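The clustering described above might look roughly like this: group the GPS fixes seen with a cell-id, drop fixes far from a robust centre (the Oklahoma points and bad-signal errors), then average the rest. This is my own guess at the shape of such an algorithm, certainly not Google’s actual implementation:

```ruby
def median(vals)
  vals.sort[vals.size / 2]
end

# Estimate a tower's position from the [lat, lon] fixes seen with its cell-id.
def estimate_tower(fixes)
  # Per-axis medians give a robust centre that shrugs off wild outliers.
  centre = [median(fixes.map { |p| p[0] }), median(fixes.map { |p| p[1] })]
  dists  = fixes.map { |p| Math.hypot(p[0] - centre[0], p[1] - centre[1]) }
  cutoff = 3 * median(dists) + 1e-9
  kept   = fixes.zip(dists).select { |_, d| d <= cutoff }.map(&:first)
  kept   = fixes if kept.empty?
  # Average the surviving fixes.
  [kept.sum { |p| p[0] } / kept.size.to_f,
   kept.sum { |p| p[1] } / kept.size.to_f]
end

fixes = [[37.78, -122.40], [37.79, -122.41], [37.77, -122.39],
         [35.48, -98.00]]  # last fix is an "Oklahoma point"
p estimate_tower(fixes)    # ≈ [37.78, -122.40]
```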
“Why doesn’t it work on my cellphone?”
We’re trying to get it working on as many as possible, some platforms don’t provide API to get cell ID. Some give part, so we do smart techniques to partly work it out. Some give multiple cell IDs. Others give full information.
Privacy
- A balance between respecting user privacy and providing good, useful functionality to the user
- How does My Location do this?
- Anonymous: no PII, no session ID
- User has full control, can disable or enable it.
What next?
- Improve accuracy and coverage
- Continue improving security
- Enabling location for 3rd parties via Android, Gears (browser)
Can use Gears to enable it for your website or application and build innovative location-based applications.
www.google.com/gmm
code.google.com/android
code.google.com/apis/gears
Technorati tags: google, mobile, map, my-location, where, where2.0, where2008
-
Where 2.0: Lifemapper 2.0: Using and Creating Geospatial Data and Open Source Tools for the Biological Community
Aimee Stewart
I work at the Biodiversity Institute at the University of Kansas; my background is in geography, GIS, remote sensing and computing. Most recently I worked on “Lifemapper” - creating an archive of ecological niche models: maps of where species might occur, based on where we know they’ve been collected by scientists. Also creating web services that expose the archive.
Showing what this looks like. Showing google earth. Showing specimen locations of a plant. Red parts are where we expect species to occur, yellow where we’ve found it, using ecological niche models. Can look and see that these came from Burke museum at Washington. Goals are archive of niche models, predictions.
Spatial data in OGC web formats, WFS coming soon, WCS for raster data, WMS too. No query analysis yet but coming in next month or so. Landscape metrics, fragmentation of habitat, comparison of habitats of different species, predicted future climates…
Also have on-demand models. Niche-modelling on a web-service on a 64 node cluster. Anyone can use this, our archive has limitations, no significant quality control, assume it’s been done by museums, but could be more done really. On-demand service can be used by researchers with their own data perhaps at higher resolution.
Niche modelling becoming more popular because more specimens becoming available. Environmental and occurrence data, put into model and it calculates formula, also project onto a map to help visualise.
Data is more available as there’s a massive effort to database and geotag it. Might be paper catalogs as that’s how it’s been collected for 300 years, now putting into databases to digitise collections. Also exposing data via the internet using standard protocols. Slide shows examples of 3 collection that when put together give more powerful collection.
Several scenarios on economic development, regionalisation and environmental aspects, modelled by the Nobel Prize winners who shared the award with Al Gore (the IPCC). We use multiple scenarios and compare them “apples to apples”.
Use this distribution data, put together with ecological variables through a modelling algorithm, to get an ecological niche model of where the species would occur. Using 15-20 variables. The model is output through a projection tool to project it onto a map.
Specimen point data is taken to create a model, using an algorithm, of the current environment, then projected back to get the distribution in the native region. Done with climate models, we get the distribution after climate change. The significance: looking at a non-native region we can see what areas might be vulnerable to invasion by the species after climate change.
The archive is created with a pipeline: it constructs a request and passes it to a 64-node cluster with web services in front; nodes retrieve environmental data using WCS, a node dispatches to openModeller, and the pipeline polls for status then retrieves and catalogs the data.
[Demo]
Exposing data on website but also exposed on web services, can see in google earth.
Showing samples that we have the data in the database but don’t have lat/lon, have around 80% of those. ~220 institutions are providing data, within those about 600 collections, fish, mammals, etc.
Other ways to access data is to request data, experiment consists of all the data and the models and maps produced on top. Just URLs, can be accessed programmatically.
Other thing is “submit an experiment”, constructing search parameters and get a URL back with information for this experiment. Get really basic data back for it, shown projected on 4 different climate scenarios, current and 3 future ones. Showing metadata for the collection and other properties.
Lifemapper 1.0
- Distributed computing
- Screen savers
- Competitive like SETI
- Captured wrong audience
- Limitations
People weren’t really interested in the topic, couldn’t handle the demands of the audience.
Lifemapper 2.0
- Funded, cluster computing, OSS, standards
Technorati tags: lifemapper, niche-modelling, where, where2.0, where2008
-
Where 2.0: InSTEDD: Humanitarian Collaboration Tales
Robert Kirkpatrick
InSTEDD was created by Larry Brilliant to create a global network for early detection and early response.
InSTEDD Overview - launched 01/08, non-profit, funding from google.org, Rockefeller… agile, an “innovation lab”.
“From a faint signal to collective action”
From a single event deemed noteworthy to collective action. Seems simple, but it’s very complex. Big chart, many parts. Integrating approaches is a significant challenge. Collaboration is at the top of the stack. The focus of the organisation is on the human integration aspects.
Must work with capabilities in many places, Iraq at temperatures 117F. “If you can make it work here, you can make it work anywhere”
We partner wherever we can with organisations working on technologies that may be useful, twitter, facebook, single use techs can be repurposed to other goals. We’re vendor agnostic, will use anything, and will innovate where we can.
To know these techs will work, we go where they are, to field labs. By failing quickly again and again we’ll create designs that can work when they’re needed and will be reliable.
Office just opened in Steung Treng Province, northern Cambodia. Sharing disease information across the border with Laos. What does usability mean in an environment like this?
Recent project: a set of libraries and applications, an OSS FeedSync implementation combining technologies to create a mesh environment that provides data in low-bandwidth environments. Have implementations in C#, Java and others. Brought up an HTTP sync service at sync.instedd.com; we’re hosted on Google Code too.
https://sahana.instedd.org - Cyclone Nargis response. Been working very hard, very little sleep, work done in past few days is astonishing.
We realised that GeoChat - messages over SMS - wasn’t what was needed in Myanmar; what was really needed was translation into Burmese, where Unicode encoding isn’t standardised, which is very difficult. What was needed was to organise rapid crowdsourcing over >11k lines: a Google spreadsheet of names, and getting feedback. True of slower-moving scenarios as well as fast-moving disasters - you can’t predict what will work. Very important to apply learnings.
[Demo]
Google spreadsheet of translated text. We have used different platforms but there wasn’t already a toolkit to solve the problem. The situation in these responses requires integrating many systems; there will never be one coordinating agency, and a reductionist approach just fails.
…
The Google Code project contains a utility that gives you a client to synchronise KML files with each other. Showing 3 KML files in Google Earth. Adding a placemark. Save it. Can synchronise it two ways with another file, figuring out what in the file is new, deleted or edited, creating an XML representation of the changeset; the changes are then applied to the other file. People can edit items in the field, pass a thumbdrive around and collaboratively edit information. One guy put together an 11Mb file by himself using contributions from other people; this type of technology allows anyone to make updates and he can merge them in. Can also sync up to a service, currently hosted by InSTEDD but looking to do EC2 or S3 versions.
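The “what is new, deleted, edited” step can be sketched in a few lines. This is my own illustration, not InSTEDD’s FeedSync code, and the placemark IDs and values are made up:

```ruby
# Given two snapshots of a file as hashes of placemark id => data,
# work out the changeset needed to bring one in line with the other.
def changeset(old, new)
  {
    added:   new.keys - old.keys,
    deleted: old.keys - new.keys,
    edited:  (old.keys & new.keys).select { |k| old[k] != new[k] }
  }
end

old = { "pm1" => "camp A", "pm2" => "bridge out" }
new = { "pm1" => "camp A (moved)", "pm3" => "clinic" }
p changeset(old, new)
# added: ["pm3"], deleted: ["pm2"], edited: ["pm1"]
```

Real FeedSync also carries per-item version history so that concurrent edits from two thumbdrives can be merged deterministically, but the diff above is the core of a one-way sync.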
Now showing online RSS document keeping the KML information and versioning data. Decentralized versioning system. Can be put into Google Spreadsheet and it will still work. Need right adapters but we’re creating it. Would like to discuss with interesting partners.
GeoChat - hook up multiple gateways to website, have a gateway for twitter, can send a text message with lat/lon and message, or location name, and tags and this will be displayed in a string and can be displayed in KML or RSS. Showing in Cambodia. Showing team moving about in google earth. Can click reply and reply directly, via correct gateway, will go to twitter. Can also do group replies in a certain location or tag. All OSS and free.
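As a rough illustration of the kind of parsing GeoChat must do on an incoming text, here’s a small sketch. The message format (an optional leading lat/lon pair, then free text with #tags) is my assumption for illustration, not GeoChat’s documented syntax:

```ruby
# Parse an SMS of the assumed form "LAT,LON message #tag1 #tag2"
# into a hash; falls back to plain text when no coordinates lead.
def parse_geochat(sms)
  msg = { tags: sms.scan(/#(\w+)/).flatten }
  if sms =~ /\A\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s+(.*)\z/m
    msg[:lat], msg[:lon], msg[:text] = $1.to_f, $2.to_f, $3
  else
    msg[:text] = sms
  end
  msg
end

m = parse_geochat("13.36,105.97 water levels rising #flood #nargis")
puts m[:lat]            # 13.36
puts m[:tags].inspect   # ["flood", "nargis"]
```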
Technorati tags: humanitarian, geochat, disaster, where, where2.0, where2008