HTML5 Geolocation API – how accurate is it, really?

If you are a developer building applications that require location information, then you need to know what is really possible with the HTML5 Geolocation API, not a bunch of hype. This blog post attempts to give you some insight into how the API works with desktop and mobile browsers, and a greater appreciation for what is and isn’t possible. I’m going to show you that accuracy depends on many factors, some of which are beyond your control, and that at best the location information returned by the API is just an approximation.

[Editor’s note: as of June 29th, 2013, Part 2 of this post is now available]

Most common use case. For the most part, HTML5 Geolocation works just fine in dense urban areas when you are stationary and your laptop or smartphone has Wifi turned on. This is the use case most commonly cited when questions are asked about accuracy. That makes sense because urban areas have many public and private Wifi routers, and cell phone towers are typically closer together. As you’ll see, the API uses these and other methods to pinpoint your location. However, it’s not always that simple, and below are some other use cases that you should take into consideration.

How does the API work? Depending on which browser you are using, the HTML5 Geolocation API approximates location based on a number of factors including your public IP address, cell tower IDs, GPS information, a list of Wifi access points, signal strengths and MAC IDs (Wifi and/or Bluetooth). It then passes that information to a Location Service usually via an HTTPS request which attempts to correlate your location from a variety of databases that include Wifi access point locations both public and private, as well as Cell Tower and IP address locations. An approximate location is then returned to your code via a JavaScript callback.
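For reference, here is roughly what the client-side piece looks like. This is a minimal sketch, not production code, and the option values are just examples:

```javascript
// Ask the browser for a one-time position fix. The accuracy value (in meters)
// is the API's own estimate and, as discussed below, can range from tens of
// meters to kilometers depending on Wifi, cell tower and IP information.
navigator.geolocation.getCurrentPosition(
  function success(position) {
    console.log('Latitude:  ' + position.coords.latitude);
    console.log('Longitude: ' + position.coords.longitude);
    console.log('Estimated accuracy (meters): ' + position.coords.accuracy);
  },
  function error(err) {
    // err.code: 1 = permission denied, 2 = position unavailable, 3 = timeout
    console.log('Geolocation error ' + err.code + ': ' + err.message);
  },
  { enableHighAccuracy: true, timeout: 10000 }
);
```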

As an example to show you what type of information is sent to a Location Service, I did some basic testing with Firefox 11. Firefox uses Google’s Location Service. On a related note, as far as I can tell Firefox 11 isn’t passing cookies any more, whereas Firefox 3.6 used to pass a user ID token.

The Firefox 11 browser sends queries to https://maps.googleapis.com/maps/api/browserlocation/json? The example request below has been obfuscated, but by looking at it you should get an idea of what content is being sent:

GET /maps/api/browserlocation/json?browser=firefox&sensor=true&wifi=mac:01-24-7c-bc-51-46%7Cssid:3x2x%7Css:-37&wifi=mac:09-86-3b-31-97-b2%7Cssid:belkin.7b2%7Css:-47&wifi=mac:28-cf-da-ba-be-13%7Cssid:HERESIARCH%20NETWORK%7Css:-49&wifi=mac:2b-cf-da-ba-be-10%7Cssid: ARCH%20GUESTS%7Css:-52&wifi=mac:08-56-3b-2b-e1-a8%7Cssid:belkin.1a8%7Css:-59&wifi=mac:02-1e-64-fd-df-67%7Cssid:Brown%20Cow%7Css:-59&wifi=mac:2a-cf-df-ba-be-10%7Cssid: ARCH%20GUESTS%7Css:-59 HTTP/1.1

Which location service do browsers use?

Not all Geolocation services are the same, and they certainly don’t all use the same algorithms and exact same databases. Because of this the results typically vary across browsers that use different Geolocation services.

Here’s my best attempt to document which Geolocation service each of the major browsers is using. I haven’t done any definitive testing; however, I do know from experience that different browsers and even different laptops or smartphones will return different locations when tested from the exact same location. Some location services are better in some cities and others are better in other cities. I haven’t come across a definitive list, most likely because the information is constantly being updated. I’ve included a link to a demo application at the bottom of this blog, and I encourage you to test the API against different browsers yourself.

  • Chrome uses Google Location Services.
  • Firefox on Windows uses Google Location Services.
  • Firefox on Linux uses GPSD – http://catb.org/gpsd/. I’m not sure if this includes Android. I haven’t had a chance to test it yet.
  • Internet Explorer 9+ uses the Microsoft Location Service.
  • Safari on iOS uses Apple Location Services for iPhone OS 3.2+.
  • I’m not sure what Safari on Windows uses. With all the public distrust between Apple and Google, I wouldn’t be surprised if Safari on Windows also uses Apple’s Location Service, but I haven’t found any documentation to verify this and I haven’t tested it.
  • Opera uses Google Location Services. On a related note, I’ve also noticed that mobile Opera on Android accesses the GPS. This is something to consider from a battery usage standpoint.

Not all browsers support HTML5. It’s important to note that not all browsers support the HTML5 Geolocation API, for example Internet Explorer 8. The API is built into the browser and is accessible using JavaScript methods on the navigator object, so it only works in browsers with that level of HTML5 support. You can research whether or not a particular browser supports Geolocation by going here: http://mobilehtml5.org/ or http://caniuse.com.

Additionally, if a user has disabled JavaScript for some reason, then your Geolocation app won’t work in their browser. JavaScript code is required to access the API.
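A simple capability check (which of course only runs if JavaScript is enabled in the first place) tells you whether the navigator object exposes the API. A minimal sketch:

```javascript
// Feature-detect the HTML5 Geolocation API before trying to use it.
if ('geolocation' in navigator) {
  console.log('This browser supports the HTML5 Geolocation API.');
} else {
  // e.g. Internet Explorer 8 lands here; consider an IP-based fallback (see below).
  console.log('This browser does NOT support the HTML5 Geolocation API.');
}
```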

HTML5 Geolocation requires an internet connection. If you lose your internet connection, then you won’t be able to access the Location Service, and most browsers will not return a location. Sometimes you can access a cached location that the API has stored in the browser, but that cached location is simply the last valid location the API calculated.
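The caching behavior is something you can influence from code via the maximumAge option, which tells the API how old a previously calculated position you are willing to accept. A sketch, with example values:

```javascript
// Accept a cached fix up to 10 minutes old rather than failing outright.
navigator.geolocation.getCurrentPosition(
  function (position) {
    console.log('Position (possibly cached): ' +
      position.coords.latitude + ', ' + position.coords.longitude);
  },
  function (err) {
    console.log('No position available: ' + err.message);
  },
  { maximumAge: 600000, timeout: 5000 } // milliseconds
);
```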

Is Wifi turned on or off? If Wifi is turned off on your phone, desktop machine, laptop or tablet, the Geolocation service will try to find your location by other methods, which include your public IP address, cell tower ID triangulation or GPS. Public IP address databases usually return a location for your internet provider’s Point of Presence, or PoP. Furthermore, some internet providers offer rotating IP addresses: you use one IP address for a particular time period, such as 48 hours, and then you get a different one. So a public IP address is usually only good enough to locate you to a particular city, a general area of a city, or a country, depending on where you are in the world.

As for cell tower IDs, it depends on what type of information your particular phone and telco carrier provide to the API. Some smartphones only return information on the current tower that the phone is pinging, which obviously makes triangulation very difficult and limits accuracy to a radius around that tower.

I’ve noticed that the native Android browser is significantly less accurate without Wifi. Without it I typically see accuracy numbers in the 1000+ meters range. As soon as I turn Wifi back on and I’m in a neighborhood or downtown area, the accuracy drops to less than 75 meters almost instantly.

Are they in a rural or urban location? Granted, the vast majority of users will be in urban locations. However, if you have requirements for users traveling outside of urban areas, then this section applies to you. Geolocation in rural areas is significantly less reliable. If Wifi is turned on but the user is not near any Wifi access points, then the Geolocation service will attempt to fall back to the other methods mentioned above. Triangulation can be much more difficult in rural areas where towers are spread further apart, and for browsers that don’t use GPS the accuracy will suffer significantly.

Are you moving or stationary? Being stationary in an urban area offers far better accuracy with the Geolocation API than when you are moving. On my native Android phones it’s rare to get an accurate reading while driving around town. Occasionally a sporadic result is returned when you stop at a light. To date, I have never gotten a valid reading while driving on a highway at speeds over 50 mph.
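If you do need to track a moving user, watchPosition() is the relevant call, and in my experience it pays to filter out the poor fixes rather than trust every callback. A rough sketch (the 100-meter threshold is just an example):

```javascript
// Continuously watch the position and ignore fixes with poor accuracy.
var watchId = navigator.geolocation.watchPosition(
  function (position) {
    if (position.coords.accuracy <= 100) { // meters; tune for your own app
      console.log('Usable fix: ' + position.coords.latitude + ', ' +
        position.coords.longitude + ' (±' + position.coords.accuracy + ' m)');
    }
  },
  function (err) {
    console.log('watchPosition error: ' + err.message);
  },
  { enableHighAccuracy: true, maximumAge: 0 }
);

// Later, stop watching to save battery:
// navigator.geolocation.clearWatch(watchId);
```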

Is a VPN turned on? If a VPN is turned on, then the location will resolve to the VPN’s public IP address. For example, say a user in Denver is logged into the company VPN, which is hosted at the headquarters office in a suburb of Dallas, Texas. The HTML5 Geolocation API will resolve the location to the headquarters’ public IP address in Dallas and not the user’s actual location. Quite a few corporate users have VPNs for security reasons.

Custom Geolocation as a fallback? Depending on your requirements, you may want to implement your own IP Geolocation using a company such as IP2Location, or use a third-party Geolocation service, such as Skyhook, as a fallback. Remember, IP Geolocation only resolves locations to a city or an area within a city. So, if you need more accuracy than that for your application, then don’t bother with this approach.

The downside to custom IP Geolocation is that it requires writing a server-side service to grab the browser’s IP address. All the common server-side languages, such as PHP, C#.NET, Java and JSP, support this. You also have to subscribe to another service that lets you query their database by IP address and returns an approximate location. There is currently no way to get this information from the browser, on the client side, using JavaScript.
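For illustration only, here is a rough sketch of that server-side piece. I’m using Node.js/Express here purely to keep the example in JavaScript (the languages mentioned above work just as well), and lookupCity() is a hypothetical stand-in for a query against a commercial IP geolocation database such as IP2Location:

```javascript
// Server-side sketch: grab the browser's public IP and return a city-level guess.
var express = require('express');
var app = express();

// Hypothetical helper; in a real app this would call a paid IP geolocation service.
function lookupCity(ip) {
  return { ip: ip, city: 'Unknown', note: 'replace with a real IP geolocation query' };
}

app.get('/api/ip-location', function (req, res) {
  // Behind a proxy or load balancer you would read X-Forwarded-For instead.
  var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
  res.json(lookupCity(ip));
});

app.listen(3000);
```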

HTML5 Geolocation doesn’t meet my requirements, what do I do? If you have critical requirements for gathering more precise location information than the HTML5 Geolocation API is capable of delivering then I’d recommend building your application using a native API such as Android or iOS.

How can I test this? You can test HTML5 Geolocation in different browsers using a test application that I built. I recommend trying it on different browsers and comparing the results yourself:

http://andygup.net/samples/html5geo/

References

Mozilla FAQ

Mozilla Developer Network

Google Location Service

W3C Geolocation API in IE 9

Safari Developer Library

Opera Geolocation

IP Geolocation

W3C – Privacy of Geolocation Implementations

Apple Q&A on Location Data

The 1 Minute Primer for HTML 5

HTML 5 is getting a lot of press these days and I get a constant stream of questions from many non-techies, as well as developers, asking me to explain HTML5 in layman’s terms. So here it is.

HTML 5 is really a combination of three things: HTML, CSS and JavaScript. When all three of these technologies work together in a web browser then you have an HTML5 application. Period.

Why should we care about HTML 5? HTML 5 brings many long-awaited enhancements that make it easier for web developers to build more complex applications. More importantly, HTML 5 is being adopted by the major browser vendors: Google, Microsoft, Mozilla and Apple. This adoption is making it possible for developers to take advantage of the latest web technologies that are built into web browsers.

How is HTML 5 “built into a web browser”? Web browsers have to interpret a web page first, and then display the content for you. Browsers contain logic that lets them parse a page’s code, and that code provides instructions for the browser to do certain things. Behind the scenes, in fact, the page you are looking at is built using code. It’s the browser that interprets the code and displays it in a way that makes sense to you. If you haven’t ever seen web page code, you can usually select View > Source on your browser’s toolbar. Cool, right?!

HTML 5. HTML 5 is the latest version of the Hypertext Markup Language (HTML) specification, which has been around in various forms since approximately 1991. HTML is a tag-based language that defines the meaning and placement of elements of a web page. For example, a <button> tag defines a clickable button on a web page.

Cascading Style Sheets (CSS). Cascading Style Sheets, or more specifically CSS version 3 (a.k.a CSS3), provide the ability to apply styling to HTML elements. An example of styling would be to change the color of an HTML <button> from grey to green, as well as defining where on a web page it will be visible such as the top left corner.

JavaScript. JavaScript, which is really the meat behind HTML 5, is a type of programming language that lets developers implement actions within a web page. An example of an “action” would be when a web page visitor clicks a button that loads a picture. So, HTML defines the <button>, CSS styles the button, and JavaScript handles the action behind the scenes by retrieving the picture and then telling the browser how to display it for the end user.
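To make the three roles concrete, here is a tiny, hypothetical example. Imagine the HTML contains <button id="loadBtn">Show picture</button> and an empty <img id="photo"> element, and the CSS rule #loadBtn { background-color: green; } makes the button green. The JavaScript then wires up the action:

```javascript
// When the (hypothetical) button is clicked, load and display a picture.
document.getElementById('loadBtn').addEventListener('click', function () {
  document.getElementById('photo').src = 'vacation-photo.jpg'; // example file name
});
```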

This all sounds great, are there any downsides? Yes. First, HTML 5 is a standards-based specification that is still a work in progress. The specification and all its associated parts won’t be finalized for some time, possibly years. The good news is that browser vendors are keen to adopt this standard as much as possible. Second, implementation across different browsers isn’t 100% consistent. The good news is that there are tools and online resources to help developers work around many of these problems. Last, older versions of browsers (e.g. Internet Explorer 7 or 8, older versions of Safari, etc.) don’t support HTML 5. There are strong campaigns under way to educate people to upgrade for security, performance and viewing experience.

So, there you have it. That’s a cursory pass at HTML 5 and I hope this post helps. I’ve added a few links at the bottom if you want to learn more about it.

Learn More:

HTML5Rocks.com – includes information on features, tutorials and great slide decks.

w3Schools.com – includes live “Try it” samples that let you explore the functionality.

W3C HTML 5 Specification – the World Wide Web Consortium is the group that writes the standards. If you are a techie, this is “the” specification that the browser vendors base their functionality on.

Top Five Resources for HTML5 Developers

Whether you are just learning about HTML5 or you’re cranking out code and don’t want to be slowed down, this is my 2012 short list of definitive HTML5, CSS3 and JavaScript resources that you need right now, at your fingertips, as you’re developing apps. I suggest bookmarking all of these web sites. If you use other sites that rock, please leave a comment with links below!

  1. Caniuse.com – This is an awesome, comprehensive site where you simply type the feature you are looking for into the search box and the page will show a table outlining which browsers support that feature.
  2. HTML5Rocks.com – From interactive presentations and tutorials to code playgrounds, this site is a great place to learn more about HTML5.
  3. W3Schools.com – Excellent resource for beginners and experts. This site has embedded “Try it yourself” samples that you can modify on the fly. This site also includes a handy HTML5 tag reference.
  4. CSS3.info – Previews, module status, articles…this site is a great resource for all things CSS3.
  5. W3C HTML4 vs HTML5 Comparison – this is the constantly updated, definitive source of what’s different between the two specifications.

And while not in my top five, I also have to give an honorable mention to Html5please.us and findmebyip.com. I’ve found that these sites are not as complete as caniuse in terms of the total number of features listed. But, I like them as a double check for browser support.

Holy Grail Resources:

W3C – W3C HTML 5 Specification – The World Wide Web Consortium is the group that writes the standards for HTML.

WHATWG.org – HTML Living Standard – This is the technology working group that makes initial recommendations to the W3C.

[Updated broken links: Dec 6, 2016, Apr 5, 2017]

Mozilla + Firefox über-release cycle = #FAIL! The long tail of costs and risks associated with fast release cycles.

Dear Mozilla Foundation, according to your web site you promote openness, innovation and participation. So, I feel strongly enough to write you about a problem. You are pushing for too many major releases in too short a time. If it wasn’t for the Firebug and HttpFox developer tools, I’d dump Firefox as my browser of choice right now. In my humble opinion they are still the best web developer tools around…for now. But back to my point, and note that this isn’t a rash or knee-jerk response: Firefox 7 appears to be the least stable browser I’ve used in a long time. Period. I’ve had a dozen lock-ups, problems on startup and various slow page load problems.

I haven’t added any new plug-ins that might de-stabilize it. In fact, I’ve been using the same set of plug-ins since Firefox 3.x. And, I haven’t had any similar widespread problems with the latest versions of Internet Explorer or Chrome. You might ask “What if it’s just your machine?” To that I say I’ve experienced this on four different machines, and many of my colleagues share the same opinion. So Mozilla, I’m hanging on by my fingertips and you are stomping on them.

The Heart of the Problem

Maybe I’m the exception, but I believe that getting cool new browser features every few months at the expense of stability is the wrong choice. I’ve said this before in another post and I’ll say it again. I’ve read countless articles saying this is what Mozilla has to do to stay competitive. I say you couldn’t be more wrong and that there needs to be a more balanced approach to major releases. Now, I’m not implying that you, Mozilla, are intentionally leaving stability or scalability behind. I’m saying that the massive rush to stay abreast of new features being released by your competitors has to come at a cost, and I believe the cost for Firefox, at this point in time, is stability.

So, in response to an outcry over this and other related problems, and in order to counteract some of the side effects of your über-release cycle, you will begin offering Extended Support Releases, or ESRs, sometime in the next year. I interpret this as an attempt to mitigate the über-release cycle’s short-term and long-term risks and costs for Enterprise customers by offering an extended support cycle for a limited number of releases and a limited time. But…readers of this post must read the fine print under the Caveats and Risks sections; for example, ESRs won’t apply to Firefox Mobile at a time when mobile usage is exploding. And, the ESR proposal makes note of the security risks of staying on an “older” release. Fair enough. However, one possible conclusion is that, on the surface, ESRs seem like a mere concession to a looming problem, and perhaps a stopgap measure at best. Perhaps I’m wrong?

Driving Factors

I want to ask Mozilla the following questions:

  • What’s coming up in your next release?
  • Are the changes really so fundamental that the next release has to be a major numbered version?
  • What metrics are you using to make your decisions?
  • How fast are your users upgrading to new versions world-wide?
  • Is the new version adoption rate trending upward or downward?
  • Who are your largest supporters? Large organizations or the millions of individual users?
  • Have you taken a public survey from your largest supporters of what they would like to see?

Now, of course, this post is just my opinion, and I’m willing to admit that I may be seeing this problem in the wrong light or a different context. But, I and a lot of others want to know what you are thinking.

Hypothetical Scenario

Here’s a hypothetical scenario on how an organization might interpret the ESR, and I speak for myself on this one and am simply presenting one outcome of possibly many. Mozilla will continue to blast along, having thirteen more major releases between now and March of 2013*. In response, CIOs of major organizations will start to choose a pattern of leap-frogging across swaths of major releases. Their development and IT teams will focus on building web apps, along with a full test suite and certification, for Firefox 7. Then their next fully tested and certified release will be targeted at Firefox 9 sometime next year. These organizations may choose to not even support Firefox 8 because it falls between their development and certification cycles. There’s also a long tail of cost associated with maintaining numerous previous releases across a multitude of browser versions from all the major vendors. In effect, these CIOs will weigh the security risks, costs and other issues against the cost of deploying an army of IT folks and developers to keep up with the über-release cycle.

Concluding Remarks

Mozilla I hope you are listening. You should take the following steps to reduce the possibility of failing as a leading browser vendor:

Focus on stability – IMHO, Firefox appears to be paying the price in the rush to add new and supposedly better features. I’m not even sure what those are because your release process isn’t transparent. As your consumer, first and foremost I would like my browser to be rock solid, followed by speed, followed by snazzy features. Rock solid to me also means that it’s as secure as possible. “Dot” releases are okay and in general they ease the support-related fears from both developers and IT teams.

Slow down the release cycles – Mozilla, you already acknowledged there’s a problem when you proposed the ESRs, but you need to go further than acknowledgement and a pat on the back which is what I consider ESRs to be. Seriously.

Now, I know I didn’t address these above, but I’m throwing these into the mix because I think they are strongly related:

Provide guidance on browser certification and best practices – if documentation for this exists today I can’t find it. Building apps on browsers, today in the year 2012, is still like the Wild West in that everyone does what they think is right, but there’s really no word from the Vendor(s) themselves. Most people point to the W3C. But, everyone agrees that what’s agreed upon in the standard is not what’s officially interpreted and implemented by browser vendors in each and every release.

It’s been speculated by others that having browser vendors offer guidelines would crush innovation, and I strongly disagree. It’s your platform we are building on and you know how to do that in the best possible way. Believe it or not, you are also a key caretaker of the internet, in that the web, in its current state, wouldn’t exist without the browser. And, I think it’s your responsibility to step up to the plate and take a leadership role, not just a feature-ship role, rather than simply hoping that everything turns out okay.

Provide official tools for browser certification – please don’t leave this entirely up to third parties with different goals and objectives. Based on your vast experience in this process, it would benefit everyone if you were to publicly share your tools, patterns, knowledge and guidance. Or, maybe you already do share this with key partners. Yes, you are open source, but not open process. Browsing through your partner sites doesn’t give any indication of publicly available tools.

I believe that the combination of these four goals would help propel Firefox on a more successful trajectory than the one we users see today. Without the inclusion of my last two suggestions, as an application developer I’m left hoping my code will work as well as possible, without truly knowing what that means. Is what we are doing simply good enough, or could we all do better? What are your recommendations on patterns for best performance, or even unit testing for various languages? I can’t help but believe that you hold the key to that level of knowledge, as well as the methodologies and tools that can help Firefox help us deliver on the next generation of web applications.

References

*Mozilla release schedule

Mozilla rapid-release schedule

Mozilla Defends Rapid Release of FireFox Versions (CIO Magazine, August 2011)

Improving Browser performance and stability – will web workers help?

The single-threaded nature of JavaScript is an old tradition that needs to go away. It was great in the wild-west internet days of the 20th century. But today we have more complex needs that are being driven by the advancements happening around good old JavaScript as we know it, such as…the on-going advancements in HTML 5.

The reason I bring this up is that I’ve been watching the discussion on Web Workers as it has evolved. It’s a brave attempt to bring a standard that imposes some sanity on this ancient notion of single threading. Now, I do want to say that this post isn’t about debating the merits of web workers, per se. It’s about giving developers better tools on which to build web applications for end users. I’ll be the first to agree that many developers (but not all!), for a variety of reasons, build apps like factories, but without many quality checks.
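For readers who haven’t looked at the proposal yet, the basic shape of the Web Workers API is simple: the heavy work moves into a separate script running on its own thread, and the page and the worker only talk to each other through messages. A minimal sketch (file names are arbitrary):

```javascript
// main.js – runs on the page's UI thread.
var worker = new Worker('worker.js');

worker.onmessage = function (event) {
  console.log('Result from worker: ' + event.data);
};

worker.postMessage(1000000); // hand the worker some work to do
```

```javascript
// worker.js – runs on its own thread and has no access to the DOM.
onmessage = function (event) {
  var total = 0;
  for (var i = 0; i < event.data; i++) {
    total += i;
  }
  postMessage(total); // send the result back to the page
};
```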

One argument the pro-single-threaded parties make is that doing away with single-threading will make things even more complicated for the companies that develop browsers and the developers that build apps on them, and that, in effect, you’d be giving web app developers free license to create even more terribly built web pages that crash browsers. For brevity’s sake, I’m only picking this one, out of many possible arguments, as the one that comes up most often in discussions.

I also don’t ever recall seeing a browser vendor saying something like this publicly, but it’s possible. This is a very weak argument that won’t stand the test of time. Sure, as we build more complex apps there will be more of both good and bad apps. That’s just the way things work. There’s no way we would ever have a single authority that reviews all web apps before they are published, perhaps similar to what Apple does with iPhone apps. Not only would it be impractical, but it certainly seems like it goes against the spirit of the internet and WWW.

I fall into the camp of evolving the tools to better fit the ever-changing and growing needs of the end users. End users don’t understand the limitations of browser technology. They don’t need to and shouldn’t be expected to. All they know is that they want to see ever more visually stunning applications that run well and don’t crash all the time.

Developer tools and technology are much, much more advanced now than when the venerable Mosaic web browser hit the scene back in 1993. As an example, all eyes are on HTML 5 (more on that at a later date), and certainly we have the well-known browser plug-ins, Flash and Silverlight, each with its own development kit. These technologies enable the building of some of the most eye-catching websites, and they really opened people’s eyes to what the web experience could be like.

Now, I am eyes-wide-open about this. There are some well-documented, but not well understood existing limitations related to the web surfing/development experience as I blogged about here. But, merely saying things should not change because it will become too complicated isn’t a good enough reason to, well…not change.  There are lots of smart people out there that love solving these types of problems.

So, I have a few suggestions of my own for the browser vendors and others to debate and work on. I think web workers are a huge step in the right direction. But I also think there are other, more strategic things that browser vendors could be doing that would also help. To me these are just as important as evolving the web standards, perhaps even more so. This is about browser vendors officially providing guidelines for us on how to do our job better:

  • Best Practices Document. All the major vendors should publish web development best practices for HTML and JavaScript development. And, I’m not talking about the W3C standard. That is what’s expected, but not actually what’s implemented. For example, I did a quick search of “web development best practices” using Google and Bing and the very first result I found was a short, not-really-so-helpful article on the Apple web site that was written in 2008!
  • Online HTML/JavaScript Validation engine(s). Each browser vendor should publish its own online HTML/JavaScript validation engine. Or, better yet, someone could build one site that checks all major browsers in one shot and provides actionable feedback. I’m aware of other types of validators, such as this one by W3C for HTML and the like. But, in general, right now it’s just a hodgepodge of third-party tools and guesswork as to whether a web app is working right. And, if you are like me and running the web debugger all the time, you know how many broken web pages there really are.

References:

The browser as an operating system

Having a basic understanding of how our web applications affect browser performance is the key to determining which of the apps you build will be great, and which will be a miserable experience for your users. You can have the world’s best-looking app with the nicest user interface ever, but if it runs horribly on most visitors’ machines or phones then you’ve done your end users a massive disservice.

I contend that the browser as a web application programming environment should be treated as its own operating system with its own well defined dependencies. If you have a basic understanding of how these dependencies work, you’ll be able to build better, more stable, faster applications.

We are constrained in what we can build because browsers provide a finite environment in which to play. To make things even more fun and challenging, in just the last five years we have gained access to some very powerful tools for building even more complex applications, such as Microsoft’s Silverlight API and Adobe’s Flex/ActionScript API. Now we can build applications with very rich graphics in days or weeks that would have taken many months or even years before these tools became available. And web applications have only recently gained the ability to semi-directly interact with the operating system to perform operations such as saving or retrieving files from the local hard drive. In the ‘dark ages’ we had to bounce files off a proxy server before being able to download them to the local machine. How we interact with the local machine is ultimately controlled by what the browser will allow.
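The File API is one example of that semi-direct interaction: the user explicitly picks a file and the page can read it locally, with no proxy server round trip. A minimal sketch, assuming the page contains a hypothetical <input type="file" id="fileInput"> element:

```javascript
// Read a user-selected file locally; the sandbox still applies – only files
// the user explicitly chose are accessible.
document.getElementById('fileInput').addEventListener('change', function (event) {
  var file = event.target.files[0];
  if (!file) { return; }

  var reader = new FileReader();
  reader.onload = function (e) {
    console.log('Read ' + file.name + ': ' + e.target.result.length + ' characters');
  };
  reader.readAsText(file);
});
```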

The browser sandbox

Browsers provide us with a well-defined sandbox in which our apps can run, and from a developer’s perspective it includes the following:

  • A JavaScript engine
  • An HTML parser
  • User Interface rendering engine
  • Add-ins/Plug-ins (e.g. Flash Player, Silverlight, etc)
  • Cache space (includes cookies and local stores like in FlashPlayer)
  • Access to the internet
  • Access to local resources

It’s also important to note, and if you’ve been building web apps for a while now you’ll know, that browser vendors don’t implement the various proposed standards exactly the same way. For example, Internet Explorer may display a certain CSS property differently. Here’s an interesting comparison chart.

What about hardware?

I’m also not saying we don’t need to pay attention to the underlying operating system. In fact, we absolutely do need to pay attention to the following. However, we interact with them only indirectly, and because of that we tend to forget just how important they are:

  • CPU
  • Memory
  • Graphics card
  • Internet connection

Mobile devices are a great example. They are getting more powerful all the time, but when they try to chug through a fully decked-out web page, it takes them longer than a typical desktop or laptop. I’ve had developers tell me an app that was running slow on everyone else’s machine was running just fine on theirs. What the developer forgot was that he had the latest, greatest, hottest laptop out there with 8 GB of memory and an excellent internet connection! By way of example, at the bottom of this post I’ve included a screenshot of a website that consumed between 50 and 80% of my quad-core laptop’s resources just to load the page.

Simple Tests

Here are some simple tests to tell if your web app is a good one that will meet the needs of your end users:

  1. How much CPU does it consume? Test it on a moderately configured machine, typical of what your users might have.
  2. How much memory does it use? And, how much memory over time? Browsers can be notoriously leaky, but your program may be contributing to it.
  3. Is everything in the correct locations in the user interface at various common browser sizes? (e.g. 1024×768, 1280×800, etc.)
  4. Does it cause temporary slowdowns or lockups? (One simple way to put numbers on this is sketched after this list.)
  5. Does it crash the browser or browser panel?
  6. Does key functionality and layout work consistently across the major browsers?
  7. Does your app work consistently across all the devices you wish to support, including mobile?
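For test #4, one quick, scriptable way to put rough numbers on slowdowns is the browser’s performance object; support varied across browsers at the time, so treat this as a sketch rather than a substitute for real profiling tools:

```javascript
// Rough page-load timing using the Navigation Timing API.
window.addEventListener('load', function () {
  var t = window.performance.timing;
  console.log('Page load took ' + (t.loadEventStart - t.navigationStart) + ' ms');
});

// Rough timing of a suspect block of code.
var start = window.performance.now();
for (var i = 0, total = 0; i < 1000000; i++) { total += i; } // stand-in for the real work
console.log('That operation took ' + (window.performance.now() - start).toFixed(1) + ' ms');
```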

A Few References

Here are a few articles and websites that are handy to have as references:

http://www.w3.org/  

https://wiki.mozilla.org/Main_Page

http://www.w3.org/TR/CSS2/cover.html#minitoc

http://taligarsiel.com/Projects/howbrowserswork1.htm

http://ejohn.org/blog/how-javascript-timers-work/

http://blog.chromium.org/2008/10/new-approach-to-browser-security-google.html

http://hacks.mozilla.org/2010/05/firefox-4-the-html5-parser-inline-svg-speed-and-more/

[Screenshot: CPU usage while loading an unnamed website on a quad-core laptop]