-
Maps, maps, maps..
I’ve been contributing to a project called “Open Street Map” (OSM) now for about seven months. OSM is attempting to create a map of the world that is available to everyone to use, and modify, for no charge. During my time mapping with OSM I’ve been able to vastly improve the coverage of Liverpool (my home city). With a lot of help from my Dad the coverage has gone from basically nothing in Liverpool to a point where two postcodes have been comprehensively mapped, and the south of Liverpool is looking really rich in data. See the animation below to see how coverage has increased over the months:
(Click the animation for a larger, slower, non-repeating version)
Now while I’ve spent lots of my spare time in the last seven months mapping, I’ve also spent all of my work time doing map-related activities (not a big surprise, considering I work for multimap.com). In September 2006 we made the Multimap API v1.1 available to the public (v1.0 being a private beta). Over the following four months we worked hard to build on that release, responding to comments from our clients and building in new ideas that we had come up with while designing our new public site (more on that in a moment…). After the release of v1.2 of the Multimap API (incorporating features such as marker decluttering, our own take on “hybrid” maps and a hugely functional database-backed spatial searching module) I moved on to helping out with the build of the new public site.
Our new public site is a great leap from the previous site: big maps, easy access to useful information on the map and really easy-to-use personalised services such as favourites and search history. We got a huge response to the launch of the new site; lots of people were happy to finally see draggable maps on multimap.com and loved being able to see things like wifi hotspots and cash points highlighted. There were also a number of people who weren’t so happy about us making such a big change to a service they make so much use of, the biggest problem being something we’ve recently remedied - the lack of Ordnance Survey Landranger maps. Unfortunately we weren’t able to get contractual terms and pricing agreed with OS in time for the initial launch.
So, I’ve covered Open Street Map and Multimap in this blog post. OSM are creating a truly global, good-looking and, most importantly, free map. Multimap have a really good front-end for displaying maps, searching for locations and finding routes. Wouldn’t it be nice to see OSM maps inside Multimap? Well, using a little JavaScript hackery and the “custom map type” features of the Multimap API, it’s actually possible. I’ve knocked up a little “bookmarklet” that you can add to your browser’s favourites. Once you’ve added it you can browse to multimap.com, click on the new favourite and you will see two new map types appear in the widget at the top left: “mapnik” and “osmarender”. Clicking on either of these will make OSM maps appear in the page. The two options correspond to the two different methods used to render the tiles; you will probably notice differences in style and coverage between them, but rather than go into detail on why that is here, I’ll direct you to the relevant pages on the OSM wiki.
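For a rough idea of how such a bookmarklet hangs together, here’s a minimal sketch - not the real bookmarklet (see the edit note below for that), and the script URL is made up. It simply injects an external script into the page; that script would then use the Multimap API’s custom map type hooks (not shown here) to register layers pointing at the standard OSM tile URL scheme.

javascript:(function(){
    /* Inject an external script that adds the OSM map types to the page.
       The URL here is illustrative, not the real one. */
    var s = document.createElement('script');
    s.type = 'text/javascript';
    s.src = 'http://example.com/osmify.js';
    document.getElementsByTagName('head')[0].appendChild(s);
})();

/* Inside that script, a tile URL for the mapnik layer follows OSM's standard
   zoom/x/y scheme; the osmarender tiles come from the tiles@home server with
   a similar scheme. */
function osmMapnikTileURL(x, y, zoom) {
    return 'http://tile.openstreetmap.org/' + zoom + '/' + x + '/' + y + '.png';
}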
You’re probably eager to try this out for yourself now so here’s the bookmarklet:
EDIT - please see this later blog post for an updated version of this bookmarklet
Either right-click the link and add it directly to your favourites, or simply drag it to your Bookmarks Toolbar. Clicking on it on this page won’t do anything as there’s no map! I’ve tried this in Firefox and Safari; I’m pretty sure Internet Explorer can’t handle bookmarklets like this, and I haven’t tried Opera.
Everything on the site should continue to work fine, such as location searches and routing, though the route highlight might not overlay perfectly on the OSM maps. Unfortunately you’ll get [object Object] in the URL bar but that doesn’t actually affect the operation of the site.
In its current form this won’t be appearing on the public site by default; at the moment the tiles are being pulled directly from the OSM servers, so we wouldn’t want to suddenly divert large amounts of traffic to them. Multimap are big fans of OSM though, and we do already have the capability to display different types of map on the site, so who knows, maybe in the future…
Update: 2nd August 2007 - We’ve rolled out some style changes to the API that will break the old version of the bookmarklet. Get hold of the new copy using the OSMify link above.
Update: 13th February 2008 - I’ve taken the link out from above as there’s a new version described on this page.
Technorati tags: javascript, bookmarklet, osm, openstreetmap, map, liverpool
-
Getting out of the house
Just thought I’d drop a note on here about a few events I’m going to soon…
Firstly, and most importantly - GeekUp Liverpool - on Thursday (31st May). I’m going to be doing a presentation on Open Street Map, and hopefully a bit of a practical demonstration too. It’ll be similar to one I did in Manchester last month (which was itself taken from one done by Steve Coast). This time though my intention is that by the end of it, the attendees will be able to go out and get mapping straight away without needing much more help.
Secondly, I’m happy to say that I’m one of the chosen few and will be attending the Yahoo! BBC Hackday 2007 in London. This should be a really fun event if only because I’ll be able to get together and have a few beers with lots of my friends. I know I have friends from Multimap, Yahoo!, OSM and GeekUp all going so it should be a great laugh. No idea what I’ll be hacking on, if anything, but I guess something map related would be the most likely.
So hopefully I’ll see you at one of the above events. If you’re coming to either, don’t forget to mark your attendance on upcoming.org - I know for the GeekUp one we’ll be relying on it to get a good idea of numbers.
-
Announcing... JS Call Tracer
After seeing Rasmus Lerdorf demonstrate running Apache in Callgrind to get a diagram of the function calls made in his PHP app, I decided I wanted something similar for use with JavaScript. I did a few searches to find something that would do the job but unfortunately there didn’t seem to be much out there. I think I found a single script, but it seemed pretty old and required you to register each individual function that you wanted to trace; unfortunately I can’t find a link to it now.
I actually decided not to bother going any further with this until a few weeks later when I was asked to provide support for a large JS implementation that I had nothing to do with designing or putting together. Having no idea what was going on in the app I decided that I really needed a function call tracer.
I briefly looked into hacking support into Firebug directly, hopefully making use of its existing JS debugging and profiling support. Unfortunately, as far as I could tell, this support only goes as far as a boolean flag that tells Firefox to log the total time spent in each function. Thus I knocked up a utility to do it myself.
It’s quite basic, and doesn’t work in all circumstances, but hopefully it should be useful to someone. I’ve set up a special page for it so that this blog page doesn’t go on too much, so head over there and take a look. Feel free to drop any comments in this blog posting though.
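To give a flavour of what a basic call tracer can do (this isn’t my utility’s code, just a sketch of a simple wrapping approach): every function property on an object gets replaced by a wrapper that logs entry and exit to Firebug’s console, indented by call depth. The object and its name are whatever you pass in.

var traceDepth = 0;

function traceObject(obj, objName) {
    // Replace each function property on the object with a logging wrapper.
    for (var prop in obj) {
        if (typeof obj[prop] == 'function') {
            (function (fnName, fn) {
                obj[fnName] = function () {
                    var indent = new Array(traceDepth + 1).join('  ');
                    console.log(indent + objName + '.' + fnName + '()');
                    traceDepth++;
                    try {
                        return fn.apply(this, arguments);
                    } finally {
                        traceDepth--;
                    }
                };
            })(prop, obj[prop]);
        }
    }
}

// Usage (myApp being whatever object holds the functions you want to trace):
// traceObject(myApp, 'myApp'); then exercise the app and watch the console.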
-
Geeking it up in t'North
I attended my first northern geek night tonight - GeekUp. I wasn’t really sure what to expect; I’m sad to say that in the past I’ve thought that there weren’t really many geeks or general techies in the north of England who really knew what they were doing, or were anywhere near the cutting edge. I’m not entirely sure where I got that idea from, not having actually met all that many techies from the area, but I’d have to guess it’s something about the companies I’ve either had interviews with, or seen jobs advertised for, over the years.
I’m glad to say that my experience tonight has definitely changed my viewpoint. I only spoke to around five of the 20 or so people who attended, but I definitely got the impression that these people knew what was going on in the world of web and were interested in getting involved. I was also glad to see that the topics of the talks were on interesting relevant subjects which I’ll give a little run-down of now:
Firstly Dan Hardiker of Adaptavist showed us how to debug web pages using Firebug. Now Firebug is nothing new to me considering all I do these days is write JS, but Dan gave us a fast-paced run through some of its most important features and still threw in a few things that surprised me, especially just how many ways you can modify the visible page using Firebug. Also interesting were the ways in which he uses Firebug to quickly get around a slow release process when making simple changes to a page (in fact he uses it for some quite complex changes too, including running the JS of an entire page from scratch). That’s definitely something I’ll be trying myself on a site I’ve been working on, whose development version can take a long time to load its numerous JavaScript files.
Secondly John Baxendale gave us a quick run through of why we should use code versioning and why we should use Subversion to do it. I was looking forward to this as Subversion isn’t something I’ve used much, having used CVS in the past and Perforce currently. He kept it fairly light on details, which is fair enough as you can never be entirely sure how much experience your audience has of a subject, but he did highlight some very good reasons for using version control and the best ways to use Subversion. He also gave us some ideas on how best to serve a Subversion repository (Apache/mod_dav/SSL or SVN+SSH) and good ways to access one (SVN command line/TortoiseSVN/SCPlugin).
Lastly Dave Verwer gave us a talk about OpenID. OpenID is something that I’m quite sure most web developers have invented in their heads on their own, but have never got as far as doing anything with. I personally went as far as looking into what might be available around a year ago and discounted OpenID as not really going about things the right way (probably my own misunderstanding). Things have really changed though over the past year, and especially over the past few months, and it’s really come to the point where you can say that OpenID is a definite solution to some of the problems of having multiple accounts on the web, and has the potential to fix many more.
Dave gave a quick run-down on how OpenID works, followed by a few demos showing how you would sign up with an OpenID provider and how you log into a site using an OpenID. There were a few technical problems with the web site he was trying to log into (not a site that he had been involved in building), but he did manage to demonstrate the general principles of OpenID and accomplished logging into his chosen site without having to put any more than an OpenID URL into the sign-in box.
I talked to a few people in the pub afterwards about OpenID in general and unfortunately I can’t say that everyone was convinced. I think the real problem at the moment is more with the lack of take-up by websites, rather than any deficiencies with the technology (which in my opinion is definitely up to the job). As OpenID support grows though I think a lot more of the smaller website developers (and basically if you’re not working for Yahoo, Google, Microsoft or AOL then I’m talking to you) will realise that it has huge potential to increase their user-base by making it significantly easier for potential users to get in and start using their services.
Well, I have to say that I had an enjoyable night tonight and will definitely be up for attending again in the future, assuming I’m free on the second Tuesday of the month. I’m also very interested in the prospect of doing something similar in Liverpool in the near future - it turns out there are even other geeks in my own city!
(After my comments about northern techies at the beginning of this post I realise I should probably add some sort of disclaimer. I’m a born and bred northerner so it’s not simply that I have anything against northerners. Also if you think I was talking about you, I wasn’t, I was talking about that guy next to you ;-)
Technorati tags: geekup, openid, firebug, firefox, subversion, svn, northern, england, manchester
-
Problems with Safari and innerHTML
Yesterday I had to look into a problem that was occurring in Safari with some code I had written that uses the innerHTML property. The code works by creating some HTML using DOM functions, then adding some third-party HTML to one of the nodes. Though in most cases the third-party content is likely to be just a number, I wanted to allow for more interesting content, such as HTML entities, so I decided the simplest method would be to use innerHTML rather than parsing the content and generating the DOM nodes myself.
The code has been in use for several months and has been working fine all this time in all the browsers I’ve tried it in. For some reason, in this one particular scenario, Safari was completely ignoring the attempts to set the innerHTML of the node. Setting the innerHTML and then attempting to read it back on the following line was also giving an empty response. For example:
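The snippet in question isn’t reproduced here, but the failing pattern was along these lines (the node and content are illustrative):

var container = document.getElementById('results'); // a node already in the page
var span = document.createElement('span');
span.id = 'count';
container.appendChild(span);
span.innerHTML = '3 &raquo;';  // the third-party content
alert(span.innerHTML);         // in this scenario Safari alerted an empty string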
I tried numerous methods to fix this problem, including setting the property before and/or after adding the DOM node to the document. I also googled it, which flagged up a number of related posts, but these generally referred to pages that were served as XHTML (i.e. pages with an .xhtml extension and the application/xhtml+xml MIME type) and didn’t offer any solution.
One thing I did notice, though, was that if I typed something along these lines into Safari’s address bar:
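It was a javascript: URL setting the property by hand - this is a reconstruction using the same illustrative names as above rather than the exact line:

javascript:void(document.getElementById('count').innerHTML = '3 &raquo;');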
then my change would occur. So it seemed that Safari was happy enough to set the innerHTML of the node, just not at the time I wanted it to, which led me to my solution:
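Again a sketch rather than the exact code - the idea is simply to defer the assignment with a short timeout:

// Defer the assignment; "span" is the illustrative node from above.
window.setTimeout(function () {
    span.innerHTML = '3 &raquo;';
}, 50);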
Yes - setting a timeout so that the innerHTML gets set 50ms later seemed to work. No idea why, and it is a bit nasty, but it was the only solution I could think of and, more importantly, it worked!
So hopefully if you see a similar problem and you need a solution, you’ll find this page and be able to make use of my dirty hack above to fix it.
Update
Firstly - something I should have mentioned: this problem has only been occurring since we started modifying the URL of the page from within JavaScript, i.e. window.location.hash = '#foo'. We knew this didn’t work exactly as we’d hoped, as the spinning status indicator keeps spinning as soon as we modify the location, but we really, really wanted to modify the location. Disabling the URL modification does fix the innerHTML problem though.

Secondly - I recently noticed further problems. I was using code exactly like that above, setting the innerHTML property and then setting it again 50ms later. Unfortunately this can cause problems too, because the first attempt might work but the second one fail, leaving you with no content again! I have also seen it fail twice in succession, only setting the content on the third attempt. The horrible solution I’ve ended up with is a recursive function that sets the innerHTML, checks whether it was set correctly (it will be empty if not) and sets a timeout to call itself after a short pause if it didn’t work. I’ve put a limit of 5 attempts on it so it doesn’t run forever, and I also check whether the value you’re trying to set is empty. The complete code for the function ends up looking something like this:
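(A sketch reconstructed from the description above, since the original listing isn’t shown here - the function name is just illustrative.)

function setInnerHTML(node, content, attempt) {
    attempt = attempt || 1;
    node.innerHTML = content;
    // If we were asked to set empty content there's nothing to check for,
    // and retrying would loop pointlessly.
    if (content === '') {
        return;
    }
    // If Safari silently dropped the assignment, retry after a short pause,
    // up to a limit of 5 attempts.
    if (node.innerHTML === '' && attempt < 5) {
        window.setTimeout(function () {
            setInnerHTML(node, content, attempt + 1);
        }, 50);
    }
}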
Technorati tags: javascript, safari, bug, browser
-
FOWA: The day I met Rasmus Lerdorf
I recently attended the “Future of Web Apps” conference organised by the friendly guys at Carson Systems. It very much consisted of ups and downs; I think I’ll just summarise as follows:
Ups:
-
last.fm talk: in depth history, tech talk, passionate guys who really enjoy doing what they’re doing
-
openid: it’s there! basically everybody that matters in the world is now behind it, yay!
-
soocial and social: the guy from soocial.com gave a talk that was a breath of fresh air, but really my best bit of the conference was meeting up with friends and heading to the pub afterwards
Downs:
-
no free wifi: no free wifi, no free wifi sniff (stupid BT; Ryan - if only you’d told us more straight away)
-
odd mix of talks: looking back at the speaker list I’m sure that more than half of the talks should have been OK, but that’s not the experience I remember; I think a non-techy focus simply left some talks lacking depth
Also, as I mention in the title, I got to meet Rasmus Lerdorf, the creator of PHP. The first 5 years of my working life were spent building Content Management Systems using PHP, so I was quite glad to meet him (in fact I told him that he’d “made my career”, though I may have meant it a little tongue-in-cheek). I shook his hand, name-dropped Stig Bakken (who used to work for my parent company and with whom I had maybe 2 IM exchanges) and told him a little about my past use of PHP. I don’t know if it meant much to him - with Stefan of moo fame thanking him for the past 10 years of PHP I doubt it did - but I’m happy I met him anyway.
To finish off, I couldn’t talk about FOWA without linking through to Stuart’s excellent notes from the days. I’ve already used them a number of times to remind me of things I heard.
Technorati tags: fowa, fowalondon07, php
-
Must.. blog.. more.. often
…and so I thought I would recount a memory from a cycle trip last summer:
In June 2006 I cycled from Hornsea on the east coast of England to Southport on the west coast. I followed a route called the Trans Pennine Trail, a 213 mile (344km) route across the country that takes you over the Pennines, on roads, along canals and along the paths of disused railways. I know that it was a long and tiring trip, but since doing it I’ve managed to forget most of the bad parts and now remember it as a great feat that I would even consider doing again (apart from the fact that there are lots of other things I could do instead).
That was until just a few minutes ago, when for no particular reason I remembered something from the first day (which is always the worst, as you’re just getting used to spending the day doing exercise). It could only have been a few hours into the ride, but I remember saying to my brother-in-law: “The best thing about this type of trip is the planning, isn’t it? Maybe next year we should plan an amazing trip cycling across Cyprus, go into full detail finding equipment and flights, then go to the pub.”
This year I’ve decided that I’m going to try to enter the London to Brighton bike ride. That one’s “only” 55 miles, but it’s still more than I’d done when I made the aforementioned statement. It’s also a race. I’m fairly hopeful that I’ll be able to get through it but I’m definitely going to have to start training soon.
Wish me luck…
Technorati tags: blog, cycling, transpenninetrail
-
Microformats, Tails Export, Bluetooth
I was recently sent a link to this article. It describes an application launched by the Bluetooth SIG that allows you to “embed a chunk of data … in a web page and have it copied to a mobile phone at the click of a mouse”. Although I must say I didn’t RTFA, I did think to myself “What’s the point in this? Why not just link to a vCard, download it, then send it to your phone?” I basically decided that all it did was convert a two-step process (download file, send file by Bluetooth) into a one-step process.
After thinking about it a bit more (and, yes, still not reading the article, I’m very much of the slashdot generation) I realised that there was a bit of a similarity between this and microformats. I got to thinking about knocking up a program that would skim over a web page, pull out all the microformats on it, and allow you to send these using bluetooth. After a mention from Andy Hume and a visit to the WSG Meetup on Microformats I decided to have a go at hacking this support into tails export (edit: a great Firefox extension that already exists for pulling microformats out of a page and exporting them to your computer).
One slight problem I had to start with was that the export feature of tails export doesn’t actually work on OS X. Tails export appears to work by creating a file containing your event or address information, then setting the URL of the browser to point at that file. Quite ingenious really: it saves having to worry about what program to load the file in, as the browser should take over and load the file in whatever program you have set up to handle it on your system. Unfortunately that doesn’t appear to work on OS X. On a Mac, on the command line, we have the nice ‘open’ command which does exactly what we want: it opens a file in the associated program. So all I needed to do was figure out how to make tails export use that when run on a Mac. After spending a frustrating few hours on a train on Friday night trying to research this using the Opera Mini browser on my phone, I finally found the solution on Saturday morning. nsIProcess is an XPCOM component that you can use to launch programs from within a Mozilla-based browser. I found a handy example online and then managed to add the support to tails export without too many problems (OK, maybe there was an hour of trying to figure out why it wouldn’t open in the Bluetooth File Exchange app, which turned out to be because I was passing a file:// URL).
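The relevant bit ends up looking roughly like this (a sketch with an illustrative file path; the key details are that nsIProcess is initialised with the /usr/bin/open executable and that it wants a plain filesystem path as its argument, not a file:// URL):

// Launch OS X's 'open' command on the exported file via XPCOM.
var exe = Components.classes["@mozilla.org/file/local;1"]
                    .createInstance(Components.interfaces.nsILocalFile);
exe.initWithPath("/usr/bin/open");

var process = Components.classes["@mozilla.org/process/util;1"]
                        .createInstance(Components.interfaces.nsIProcess);
process.init(exe);

// Pass a plain filesystem path -- passing a file:// URL was what stopped
// Bluetooth File Exchange from opening the file.
var args = ["/tmp/contact.vcf"];
process.run(false, args, args.length);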
So, that’s about it. I’ve packaged up a new version of tails export; you can download/install it as tailsexport-0.3.1-jmck.xpi (update: see the bottom of this article). I make no guarantees for the reliability of this extension - if it blows up your computer and deletes all your blog postings, I’m taking no responsibility. This is the first time I’ve touched a Mozilla/Firefox extension so I have no idea if I’ve done things the right way. That said, it works nicely for me; my Sony Ericsson K750i works well with the vCards, though unfortunately it doesn’t seem to accept the iCal stuff. I’ll let Robert de Bruin know and see if he’d like to add my changes to the official build.
As a last note I thought I’d mention that I did eventually read the article that I’ve linked to right at the top of this page. I thought I should before writing this blog posting, just in case it turned out that the application was doing something amazingly clever that I’d just missed. What they appear to be doing is having developers include a JS file on their page; then, when users click the “TransSend” button, some JS runs to either pick the bits of information out of the page and construct a vCard on the fly, or use a pre-built hardcoded vCard (I haven’t yet found docs on how to embed TransSend stuff yourself, I’ve just looked at the examples). This seems like a really nasty solution to me and I really wish they had heard about Microformats at the Bluetooth SIG. Microformats look like the perfect solution for them, it’s just a shame they’re a bit behind the times!
Update! 26th October 2006 - As I mentioned above, the extension didn’t work for exporting calendar events to my phone. As it turns out it didn’t work for sending vCards to Nokia phones either (thanks Stuart for testing that). I’ve put in small updates to fix these problems, which you can find in tails-export-0.3.1-jmck2.xpi. The calendar event problem was caused by the file extension: my phone only accepts .vcs, not .ics, so I’ve simply changed the extension for now. The Nokia problem was due to bad line endings; I’ve fixed this using a JS replace, but hopefully I’ll fix it in a better way when I update to Brian Suda’s latest XSL.
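For reference, the line-ending fix is just a one-line replace to normalise everything to CRLF, which is what the Nokias (and the vCard spec) seem to expect - the variable name is illustrative:

// Normalise to CRLF line endings before sending; "vcardText" is illustrative.
vcardText = vcardText.replace(/\r?\n/g, "\r\n");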
Update 2! 26th October 2006 - It seems Nokia phones are still having problems which appear to be due to a lack of support for URIs for the PHOTO value (and possibly other values). Also note that there are character set issues too, but these should be fixed when I update to the newer XSL.
Technorati tags: microformats, bluetooth, firefox
-
Resizing IFrames Across Domains
While recently looking into Google Gadgets and pondering on the possibility of an open gadgets API that allowed you to place gadgets on your own site that were hosted elsewhere, I came up with a method that can be used for resizing IFrames where the page within the IFrame is hosted on a different domain.
A simple search for “resizing iframes” on Google returns 581,000 results, and the first one that I found gave a perfectly reasonable method for resizing iframes when all the content is stored on a single domain. Unfortunately due to the security restrictions in modern browsers, JavaScript in a page hosted on one domain cannot access a page hosted on an alternative domain, even if you really really want it to. But there is a way to allow communication.
So first, before I describe the solution I’ll mention a caveat - this solution relies on the page in the iframe doing something. This means you can’t have resizable iframes for just any content - either you have to have control of the content in the iframe or its owner has to make functionality available to you, but I think that’s a good security restriction. You have to trust the content you’re linking to if you’re going to implement this.
So - on to the solution. Put an iframe in your iframe. That’s it, that’s all you need to do. Come back next week for how to make Windows completely safe from viruses and Mac OS popular…
What? You want more details? How about if I tell you to make the inner iframe link to something in the main window’s domain, does that help?
The diagram above aims to show you what I mean by this. You can also see this example in action. The main page in the example is served by the host mcknut.googlepages.com. It in turn has an iframe on it with some content from www.johnmckerrell.com. The three links inside the iframe try three different methods for resizing the iframe.
-
The first link tries to simply access the parent window directly. This will not work on any modern browsers as it will be flagged as a security violation.
-
The second link tries the best solution. There is a hidden iframe inside the main iframe. When you click on the second link iframe-resize1.html is loaded into the hidden iframe from mcknut.googlepages.com. iframe-resize1.html is passed a height value from the main iframe and then using JavaScript can access the main window, instructing it to resize the iframe. This is not flagged as a security risk because the domain of the page in the hidden iframe is the same as that of the main window.
-
The third link tries another alternative which also works, but is slightly more cumbersome. This version loads the resizing page in the main iframe. The resizing page is served from the same domain as the main page and calls some JavaScript on the main page to resize the iframe. It then uses a callback mechanism to return the iframe to the original page. This is more cumbersome as it has to reload the content in the iframe, and would not be suitable for an iframe with dynamic content that might need resizing numerous times.
So, to wrap up, the second solution mentioned above is the ideal way, as it resizes the iframe without affecting the content within it. I do see a potential security problem if you have multiple iframes with content from multiple domains in the page, as you might find that any iframe could end up with the ability to impersonate any other iframe. I think this could be solved, though, by labelling the iframes with random strings and using those strings when communicating from iframe to main page. I’m also not entirely sure about the validity of the window.parent.parent.updateIFrame() line. As you can see from the diagram below, it looks like I’m being allowed to cross domains in my JavaScript, which of course should not be possible. If this turns out to be a security violation that browsers are likely to fix, though, I think window.top.updateIFrame() would suffice in most circumstances and should in theory be fine.
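To make the second method more concrete, here’s a stripped-down sketch of the three pieces involved. The domain names and identifiers are illustrative (the real example uses mcknut.googlepages.com and www.johnmckerrell.com), but iframe-resize1.html is the helper page described above:

/* --- Main page, served from domain-a.example ---
   Contains: <iframe id="gadget" src="http://domain-b.example/gadget.html"></iframe> */
function updateIFrame(height) {
    // Called from the same-domain helper page two frames down.
    document.getElementById('gadget').style.height = height + 'px';
}

/* --- gadget.html, served from domain-b.example (the content in the iframe) ---
   Contains: <iframe id="resizer" style="display:none"></iframe> */
function requestResize(height) {
    // Load the helper page from the main page's domain into the hidden
    // iframe, passing the desired height in the URL fragment.
    document.getElementById('resizer').src =
        'http://domain-a.example/iframe-resize1.html#' + height;
}

/* --- iframe-resize1.html, served from domain-a.example (the hidden iframe) ---
   Same domain as the main window, so it's allowed to call straight up to it. */
var height = parseInt(window.location.hash.substring(1), 10);
window.parent.parent.updateIFrame(height);

One wrinkle worth noting: changing only the fragment of the hidden iframe’s src won’t reload the helper page, so repeated resizes would need something like a timestamp appended to the URL to force a fresh load each time.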
Please leave comments to let me know what you think of this solution. I’ve not been able to find it in use anywhere else, but if you’ve seen something similar before then let me know. Also, if you have any suggestions for improvements I’d be happy to hear them. I’ve been intending to have a go at a mechanism to allow two-way communication between the iframe and the main page (currently you still wouldn’t be able to pass information into the iframe from the main page) but I haven’t had time to knock something up, so if you do, let me know.
UPDATE: IFrame Example Code - I’ve now collected the files together into a zip file that you can download. You will need to change the domains (mcknut.googlepages.com and johnmckerrell.com) to your own domains though for it to work correctly.
UPDATE2: Basic iframes demo - I’ve stripped the example down to the three files you need and structured them into a hierarchy that should make it clearer which files go on which domain.
-
New Blog, Old Blog
So I’ve decided to replace my previous blog with a standard WordPress blog.
My previous blog made use of whizzy, fun, modern technologies to allow me to host my entire site on Google Base, Google Pages and del.icio.us. Unfortunately, because it was something I had just knocked up, there was no comment support, and of course it did require Java and JavaScript to be enabled in the browser.
Thus I’ve given up on it for now and moved over to WordPress. Who knows, maybe I’ll give up on this when I realise I have no idea how to customise it to my liking, or when something even more whizzy and fun comes along, but for now, this is it. If you would like to take a look at my older site, you can find it at mcknut.googlepages.com.