March 2007

After seeing Rasmus Lerdorf demonstrate running Apache in Callgrind to get a diagram of the function calls made in his PHP app, I decided I wanted something similar for use with JavaScript. I did a few searches to find something that would do the job, but unfortunately there didn't seem to be much out there. I think I found a single script, but it seemed pretty old and required you to register each individual function that you wanted to trace; unfortunately I can't find a link to it now.

I actually decided not to bother going any further with this until a few weeks later when I was asked to provide support for a large JS implementation that I had nothing to do with designing or putting together. Having no idea what was going on in the app I decided that I really needed a function call tracer.

I briefly looked into hacking support into Firebug directly, hoping to make use of its existing JS debugging and profiling support. Unfortunately, as far as I could tell, that support only goes as far as a boolean flag that tells Firefox to log the total time spent in each function. Thus I knocked up a utility to do it myself.

It's quite basic, and doesn't work in all circumstances, but hopefully it should be useful to someone. I've set up a separate page for it so that this blog post doesn't go on too much, so head over there and take a look. Feel free to drop any comments on this blog post though.

I attended my first northern geek night tonight – GeekUp. I wasn't really sure what to expect; I'm sad to say that in the past I'd thought there weren't really many geeks or general techies in the north of England who knew what they were doing, or were anywhere near the cutting edge. I'm not entirely sure where I got that idea from, not having actually met all that many techies from the area, but I'd have to guess it's something about the companies I've either had interviews with, or seen jobs advertised for, over the years.

I'm glad to say that my experience tonight has definitely changed my viewpoint. I only spoke to around five of the 20 or so people who attended, but I definitely got the impression that these people knew what was going on in the world of the web and were interested in getting involved. I was also glad to see that the talks were on interesting, relevant subjects, which I'll give a little run-down of now:

Firstly, Dan Hardiker of Adaptavist showed us how to debug web pages using Firebug. Now Firebug is nothing new to me, considering all I do these days is write JS, but Dan gave us a fast-paced run through some of its most important features and still threw in a few things that surprised me, especially just how many ways you can modify the visible page using Firebug. Also interesting were the ways in which he uses Firebug to quickly get around a slow release process when making simple changes to a page (in fact he uses it for some quite complex changes too, including running the JS of an entire page from scratch). That's definitely one I'll be looking at myself for a site I've been working on, whose development version can take a long time to load its numerous JavaScript files.

Secondly, John Baxendale gave us a quick run-through of why we should use code versioning and why we should use Subversion to do it. I was looking forward to this, as Subversion isn't something I've used much, having used CVS in the past and Perforce currently. He kept it fairly light on details, which is fair enough as you can never be entirely sure how much experience your audience has with a subject, but he did highlight some very good reasons for using code versioning systems and gave a quick run through the best ways to use Subversion. He also gave us some ideas on how best to serve a Subversion repository (Apache/mod_dav/SSL or SVN+SSH) and also good ways to access one (SVN command line/TortoiseSVN/SCPlugin).

Lastly Dave Verwer gave us a talk about OpenID. OpenID is something that I’m quite sure most web developers have invented in their heads on their own, but have never got as far as doing anything with. I personally went as far as looking into what might be available around a year ago and discounted OpenID as not really going about things the right way (probably my own misunderstanding). Things have really changed though over the past year, and especially over the past few months, and it’s really come to the point where you can say that OpenID is a definite solution to some of the problems of having multiple accounts on the web, and has the potential to fix many more.

Dave gave a quick run-down of how OpenID works, followed by a few demos showing how you would sign up with an OpenID provider and how you log into a site using an OpenID. There were a few technical problems with the web site he was actually trying to log into (not a site he'd been involved in building), but he did manage to demonstrate the general principles of OpenID and accomplished logging into his chosen site without having to put anything more than an OpenID URL into the sign-in box.

I talked to a few people in the pub afterwards about OpenID in general and unfortunately I can’t say that everyone was convinced. I think the real problem at the moment is more with the lack of take-up by websites, rather than any deficiencies with the technology (which in my opinion is definitely up to the job). As OpenID support grows though I think a lot more of the smaller website developers (and basically if you’re not working for Yahoo, Google, Microsoft or AOL then I’m talking to you) will realise that it has huge potential to increase their user-base by making it significantly easier for potential users to get in and start using their services.

Well, I have to say that I had an enjoyable night tonight and will definitely be up for attending in the future, assuming I'm free on the second Tuesday of the month. I'm also very interested in the prospect of doing something similar in Liverpool in the near future – it turns out there are even other geeks in my own city!

(After my comments about northern techies at the beginning of this post, I realise I should probably add some sort of disclaimer. I'm a born and bred northerner, so it's not that I have anything against northerners. Also, if you think I was talking about you, I wasn't – I was talking about that guy next to you. 😉)


Yesterday I had to look into a problem that was occurring in Safari with some code I'd written that used the innerHTML property. The code works by creating some HTML using DOM functions, then adding some 3rd party HTML to one of the nodes. Though in most cases the 3rd party content is likely to be just a number, I wanted to allow for more interesting content, such as HTML entities, so I decided the simplest method would be to use innerHTML rather than parsing the content and generating the DOM nodes myself.

The code has been in use for several months without issue and has been working fine all this time in all the browsers I’ve tried it in. For some reason in this one particular scenario though, Safari was completely ignoring the attempts to set the innerHTML of the node. Setting the innerHTML and then on the following line attempting to read it was also giving an empty response. For example:

text = "foo";
node.innerHTML = text;
alert( "html=" + node.innerHTML ); // Pops up message saying "html="

I tried numerous methods to fix this problem, including setting the property before and/or after adding the DOM node to the document. I also googled it, which flagged up a number of related posts, but these generally referred to pages that were served as XHTML (i.e. pages with an .xhtml extension and the application/xhtml+xml MIME type) and offered no solution.

One thing I did notice, though, was that if I typed into the address bar of Safari something along these lines:

javascript:node.innerHTML = "foo"

Then my change would occur, so it seemed that Safari was happy enough to set the innerHTML of the node, just not at the time I wanted it to, which led me onto my solution:

setTimeout( function() { node.innerHTML = text; }, 50 );

Yes, setting a timeout to occur in 50ms at which point the innerHTML would be set seemed to work. No idea why, and it is a bit nasty, but it was the only solution I could think of, and more importantly – it worked!

So hopefully if you see a similar problem and you need a solution, you’ll find this page and be able to make use of my dirty hack above to fix it.


Firstly – something I should have mentioned. This problem has only been occurring since we started modifying the URL of the page from within JavaScript, i.e. window.location.hash = '#foo'. We knew this didn't work exactly as we'd hoped, as the spinning status indicator keeps spinning as soon as we modify the location, but we really, really wanted to modify the location. Disabling the URL modification does fix the .innerHTML problem though.

Secondly – I recently noticed further problems. I was using code exactly like that above, setting the .innerHTML property, then setting it again 50ms later. Unfortunately this can have problems too, because the first attempt might work but the second one fail, leaving you with no content again! I have also seen it fail twice in succession, only setting the content on the third attempt. The horrible solution I've ended up with is a recursive function that sets the .innerHTML, checks whether it was set correctly (the node will be empty if not), and sets a timeout to call itself after a short pause if it didn't work. I've put a limit of 5 attempts on it to make sure it doesn't run forever, and I also check whether the value you're trying to set is empty. The complete code for the function is as follows:

function setInnerHTML( element, html, count ) {
    element.innerHTML = html;
    if( ! count )
        count = 1;
    // If the write was dropped (Safari leaves the node empty),
    // retry after a short pause, up to 5 attempts in total.
    if( html != '' && element.innerHTML == '' && count < 5 ) {
        setTimeout( function() {
            setInnerHTML( element, html, count + 1 );
        }, 50 );
    }
}
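For quick experimentation outside a browser, here's a self-contained sketch of the same retry idea, with the closing braces in place and the attempt counter incremented on each retry. The plain object standing in for a DOM element is purely my own testing device – on a real page you'd pass an actual node:

```javascript
// Same retry logic as in the post: set innerHTML, and if the
// write was dropped, try again after 50ms, up to 5 attempts.
function setInnerHTML( element, html, count ) {
    element.innerHTML = html;
    if( ! count )
        count = 1;
    if( html != '' && element.innerHTML == '' && count < 5 ) {
        setTimeout( function() {
            setInnerHTML( element, html, count + 1 );
        }, 50 );
    }
}

// Outside a browser, a plain object can stand in for a DOM node;
// the assignment simply sticks, so no retries are scheduled.
var fake = { innerHTML: '' };
setInnerHTML( fake, 'foo' );
console.log( fake.innerHTML ); // "foo"
```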
