Archive for the ‘Browsers’ Category

In Part 1 we looked at the differences between partially and fully offline use cases. Part 2 provides an overview of the HTML5 interfaces and JavaScript APIs that make it possible to take web applications offline. Going offline involves working with multiple pieces and coding for specific patterns. I've tried my best to stick to technology that is widely available across the largest variety of browsers.

Offline dependencies

Offline web applications are dependent on three things. It doesn't matter whether your application is partially or fully offline; you'll still need to address all three in your code.

  • Caching HTML, CSS and JavaScript
  • Data Storage
  • Offline/Online detection

Caching

Application Cache. The Application Cache, or AppCache, interface lets you specify and store HTML and CSS files as well as JavaScript libraries so that they are available from the browser’s native cache. Once an item is in the cache the browser will use it regardless of whether it’s online or offline. It’s almost like you never went offline!

The AppCache is an essential part of your application strategy because it is what allows the browser to reload or restart the app while offline. Without it, an application will simply fail to reload while offline.
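
To give you a feel for it, here's a minimal sketch of the JavaScript side of the AppCache (it assumes your page declares a manifest, e.g. <html manifest="app.appcache">, where the file name is just an example):

var appCache = window.applicationCache;

// When the browser has downloaded a newer version of the cached files,
// swap them in and reload so the user sees the update.
appCache.addEventListener('updateready', function () {
    if (appCache.status === appCache.UPDATEREADY) {
        appCache.swapCache();
        window.location.reload();
    }
}, false);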

Data Storage

Browsers have a variety of built-in JavaScript APIs for storing data. The data can be used to maintain the application's state, such as bookmarks and form data, or to store information such as maps, address and phone lists, TO-DOs or points of interest for a vacation.

LocalStorage. The LocalStorage API is super-easy to use. It stores Strings in simple key/value pairs, and it's limited to about 5 MB on most browsers. The two main challenges you'll run into with LocalStorage are hitting the storage limit and the performance hit of serializing and deserializing data.
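
A minimal sketch of both pain points (the key name and sample data are made up):

var bookmarks = [{ title: 'Example', url: 'http://example.com' }];

try {
    // Strings only: objects have to be serialized on the way in...
    localStorage.setItem('bookmarks', JSON.stringify(bookmarks));
} catch (e) {
    // ...and most browsers throw a quota error at roughly the 5 MB mark.
    console.log('Storage limit reached: ' + e.name);
}

// ...and deserialized on the way out, which gets expensive for large data.
var restored = JSON.parse(localStorage.getItem('bookmarks') || '[]');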

IndexedDB. IndexedDB is essentially an asynchronous NoSQL database that lets you store a wide variety of datatypes so that you don't have to deal with serialization/deserialization. Datatypes include String, Object, Array, Blob, ArrayBuffer, Uint8Array and File. While many online sources will tell you that there isn't a size limit, in general you should keep your storage on a mobile device to around 50 – 100 MB to help prevent the browser from crashing.
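
Here's a minimal sketch of what "no serialization step" looks like in practice (the database and store names are made up):

var request = indexedDB.open('offlineDemo', 1);

// Runs on first open (or version bump) so you can define the schema.
request.onupgradeneeded = function (e) {
    e.target.result.createObjectStore('places', { keyPath: 'id' });
};

request.onsuccess = function (e) {
    var db = e.target.result;
    var tx = db.transaction('places', 'readwrite');
    // Objects, Arrays, Blobs, etc. go in as-is, with no JSON.stringify required.
    tx.objectStore('places').put({ id: 1, name: 'Acme HQ', coords: [45.5, -122.6] });
};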

WebSQL. It's widely recommended that you not build applications directly on WebSQL. The World Wide Web Consortium (W3C) is letting this standard die off in favor of IndexedDB and LocalStorage. I'm really only including it here because some browsers, such as Safari 7 and the Android native browsers before 4.4, only support WebSQL. For more information on how to get around this, read down to the section on IndexedDBShim.

3rd Party Browser Storage

If the built-in browser storage capabilities aren’t meeting your needs you still have other options.

IndexedDBShim. IndexedDBShim is a polyfill for WebSQL-based browsers. Because IndexedDB isn't natively supported on older browsers such as Safari 7 and older versions of Opera, you can use this 3rd party shim to transparently translate your IndexedDB code to work across Android and iOS.

PouchDB. PouchDB is an open source, experimental library that attempts to smooth some of IndexedDB's rough edges as well as provide additional functionality, such as the ability to sync with remote stores.

LocalForage (Mozilla). LocalForage is also an attempt to bridge the gap between LocalStorage and IndexedDB. It gives you an interface with much wider browser coverage than IndexedDB by itself. One of the downsides is the amount of storage you can use: if a user is on an older browser such as IE8 that's limited to LocalStorage, then that user will be limited to storing about 5 MB of data. If your requirements call for more than that, such as downloading large address lists, then the app won't work on that browser, or you'll have to build in some sort of paging mechanism that deletes old data and brings in the new.
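
Usage is intentionally LocalStorage-like. A quick sketch (the key name and data are made up):

var addresses = [{ name: 'Jane Doe', phone: '555-0100' }];

// localForage picks the best available backend (IndexedDB, then WebSQL,
// then LocalStorage) and the calling code stays the same either way.
localforage.setItem('addressList', addresses).then(function () {
    return localforage.getItem('addressList');
}).then(function (list) {
    console.log('Restored ' + list.length + ' addresses');
});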

Offline/Online Detection

There are a number of ways to detect if the browser is online or offline as well as when the internet status changes.

navigator.onLine. Some browsers have a built-in detection mechanism. However, it is not always reliable, and false positives are a distinct possibility. For that reason you will have to build additional detection capabilities or lean towards a 3rd party library.
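
The built-in pieces are the navigator.onLine flag and the browser's online/offline events. A minimal sketch:

function updateStatus() {
    // onLine === true only means the browser *thinks* it has a connection;
    // treat it as a hint, not proof that your server is reachable.
    console.log(navigator.onLine ? 'online' : 'offline');
}

window.addEventListener('online', updateStatus);
window.addEventListener('offline', updateStatus);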

Offline.js. Offline.js is a small Open Source library (~3KB) that detects when you lose an internet connection and when it comes back up. While not perfect, it does handle a lot of cross-browser compatibility issues for you. And, if you find bugs you can always create a fix and submit pull requests.

References

Caniuse – IndexedDB

Caniuse – LocalStorage

Caniuse – WebSQL

Let’s Take This Offline

Posted in Browsers, JavaScript, Mobile | Comments Off

5 ways that passwords get compromised

Over the last month as I’ve been doing normal updating of passwords I came across several major public company websites that gave me the following error:

The password you entered is invalid. Please enter between 8 and 20 characters or numbers only.

Anyone who knows anything about "Passwords 101" knows that if you go with a minimum of 8 characters and/or numbers, it could be problematic if someone hacked into the password database. Security experts say that excellent password security includes alphabetic, numeric and non-alphanumeric characters. A password of only 8 characters and/or numbers could be cracked in milliseconds.

This nicely dovetails with a number of conversations I've had recently, and there is so much speculation about password security that I felt compelled to list the potential security holes for passwords. There is very little you can do beyond minding your own passwords' strength; the rest is up to the companies and organizations that host your data. Here's my list of the most common ways that passwords get compromised.

Inadequate passwords – I suppose it makes sense to start off my list with this topic. But first, a few important words about password strength: simple passwords, such as those containing a limited mix of numbers and letters, for example "Test1234", can be cracked in milliseconds on a typical laptop. When criminals get ahold of a username/password list, the first thing they do is run a dictionary attack, in which they try to compromise the easiest-to-break passwords first.

Unencrypted passwords stored in a database – Not encrypting passwords in some way is the same thing as leaving the keys in the ignition of your car with the door unlocked. This is like ice cream to anyone who has access to the database, legally or illegally: they can simply download the ready-to-use usernames and passwords. It doesn't matter how sophisticated your password is if it's stored unencrypted. There is no way for the user to know whether the passwords are encrypted, either; it's completely up to the IT department that controls the database.
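
For the developers reading along, here's a minimal sketch of the standard alternative, salted and deliberately slow hashing, shown in Node.js (the function names are mine, not from any site mentioned above):

var crypto = require('crypto');

// Store a random salt plus a slow derived hash, never the password itself.
function hashPassword(password) {
    var salt = crypto.randomBytes(16).toString('hex');
    var hash = crypto.pbkdf2Sync(password, salt, 100000, 64, 'sha256');
    return salt + ':' + hash.toString('hex');
}

function verifyPassword(password, stored) {
    var parts = stored.split(':');
    var hash = crypto.pbkdf2Sync(password, parts[0], 100000, 64, 'sha256');
    return hash.toString('hex') === parts[1];
}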

Phishing virus – This type of virus can expose the usernames and passwords of a targeted organization. Its intent is to trick someone into entering their username and password on a fake website that mimics another website, such as an email logon screen. Once someone enters their credentials, they are immediately available to the criminals. The best prevention is to not open any suspicious attachments and to keep your virus software up to date. There is no 100% cure against getting a phishing virus, since any attachment, even from a legitimate source, could be compromised. But a good start is not opening attachments from someone you don't know, or ones with suspicious-sounding names. Trust your instincts.

Keystroke loggers – This is a spy program that can be installed on any computer. They can be very hard to detect, and they do exactly as advertised: they log everything you do on a computer and typically relay that information somewhere else on the internet where your data can be examined. The best protection against keystroke loggers, and viruses as well, is a multi-faceted approach: anti-virus programs, spyware sweepers and software firewalls. In addition, occasionally review which programs are running on your computer and research any program names you don't recognize.

Network Sniffers – Sniffer software can monitor all internet traffic over a network, and network sniffers can easily compromise public WiFi. The person who collects the sniffer data can sift through the digital traffic and siphon out different types of login requests. Once they have the login information, if it's encrypted, they can run cracker programs against it. As an internet surfer, there is something you can do to help prevent your username and password from being siphoned off: purchase a consumer Virtual Private Network (VPN) product and always use it, especially if you are on open WiFi at some place like Starbucks or an airport.

Conclusion

Is there such a thing as a truly secure password? Definitely not. This is especially true if the database server containing the password has less-than-adequate security measures in place to protect it from unauthorized intrusion.

Is there anything you can do to protect yourself? Absolutely. My minimum recommendation is to get a password manager program. One can create strong, unique passwords for every website you use, and some password managers also let you securely share passwords between phone, tablet and laptop. Search for the words "password manager" to find out more. Here's an article you can peruse as a starting point: pcmag.com. You should also keep your anti-virus software up to date and regularly run spyware scanners. Some folks go one step further and install a software firewall that lets you control all communications to and from your computer. Last but not least, you can use VPN software when surfing the internet.

So what did I do about the websites mentioned above with poor security? In one case I wrote the CEO of the company, and I also switched to using the maximum number of characters and numbers, which in that case was 20; that website didn't really store anything vital. In the other case, I dropped the website like a lead balloon. If their password security is less than optimal, I couldn't help but wonder about the rest of their digital information security practices.

Posted in Browsers, Security | Comments Off

Deleting an HTML Application Cache

When you are testing web applications that use an Application Cache, which is defined by a manifest file, you have to delete the cache every time you make a change to the application. If you don't, none of the changes you make will show up. The very purpose of the Application Cache is to semi-permanently store your HTML, CSS, JavaScript and images. It's becoming increasingly popular for speeding up web app performance, and it's a requirement for taking web apps offline. In fact, Google now uses an application cache for their home page.

Simply trying to delete your browser cache in the normal way won’t necessarily clear the Application Cache and its associated files. So here’s a quick rundown that will hopefully save you some time.

Chrome – browse to chrome://appcache-internals/.  There may be a number of different caches listed. Select ‘Remove’ for any cache that you want to go bye-bye.

Chrome (Mobile Android) – go to Settings > Privacy (under Advanced) > CLEAR BROWSING DATA, checkbox the ‘Clear the cache’ option and then select the ‘Clear’ button.

IE 10 – go to Tools > Internet Options > Settings > Caches and databases tab. Select the cache that you want to delete and then click the 'Delete' button.

Safari (Mobile) – For Safari iPhone and iPad go to Settings and select “Clear Cookies and Data.”

Safari (Desktop) – Simply attempting Develop > Empty Caches may not work. On a Mac you may have to close your browser, manually delete the cache database by going to ~/Library/Caches/com.apple.Safari and moving any item ending in .db to the trash, then restart the browser. If this doesn't work, try restarting your machine. Yep, it's an awful workflow, and it's been a known bug in Safari dating back to at least version 6.

Firefox (Desktop) – go to Tools > Options > Advanced > Network > Offline data > Clear Now.

Want to learn more about Application Caches? Here's a good technical overview from WHATWG describing what an application cache is. And MDN has a good article on Using the application cache.

Posted in Browsers | Comments Off

Making code changes and reloading locally hosted web pages over and over again is a pattern familiar to web developers worldwide. Another familiar pattern is constantly wondering whether your changes are being cached by the browser and not properly reflected in what you are seeing.

Fear not…there is a very easy fix for this, and it doesn't involve using the browser's empty-cache options every single time between page reloads. Simply tell your local web server to send the browser "no-cache" directives in the HTTP headers and then you should be good to go.

Once you make this change every web page you serve locally will automatically refresh, every single time. Here’s what the W3C has to say about no-cache headers:

 When the no-cache directive is present in a request message, an application SHOULD forward the request toward the origin server even if it has a cached copy of what is being requested. 

Make the change in Apache. Here's how you make the change in your /etc/apache2/httpd.conf file on a Mac running OS X 10.8+. Depending on how your machine is set up, you can run the command "sudo pico httpd.conf", enter your admin password, and then use the shortcuts listed at the bottom of the pico window, or the 'up' and 'down' arrow keys, to navigate around the file. Typically, the following text is pasted below any other 'FilesMatch' tags that may reside in the configuration file. Once you are done, be sure to restart Apache. On Mavericks the command is "sudo apachectl restart":

<FilesMatch "\.(html|htm|js|css)$">
    FileETag None
    <IfModule mod_headers.c>
        Header unset ETag
        Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
    </IfModule>
</FilesMatch>

Make the change on IIS 7. If you want to make the change on Windows 7, Windows 2008/2008R2 or Vista, here is a link to Microsoft Technet. If you are using IIS Manager, in Step 4 choose the 'expire immediately' option. Or, if you are using the command line, copy this line and run it:

    appcmd set config /section:staticContent /clientCache.cacheControlMode:DisableCache

If you have some other operating system version hopefully you get the idea from the suggestions above and apply similar changes for your system.

Optimizing your own public web server cache settings. One last note: the no-cache header setting is typically only used in a development environment. To get the best page performance for your visitors you should allow some browser caching. If you want to learn more about optimizing browser caching, here is a good article.

References

Optimizing Headers (Google)
RFC 2616, Section 14 Header Field Definitions
Configure HTTP Expires Response Header (IIS 7)
Manipulating HTTP headers with htaccess (you can make the same no-cache header change in httpd.conf in Mavericks)

Posted in Browsers | Comments Off

This post is about web applications designed for online-only usage that, for reasons beyond your control, will occasionally go offline or appear to non-techy end users to have connection problems. Even though we expect it, connectivity is not guaranteed. The good news: there are many things you can control to help improve the usability of your sites and the perception of their uptime.

The Internet is inherently unreliable: it goes up and down, and gets faster and slower, all the time. It's even more unreliable on the mobile web than when you're plugged into a dedicated Ethernet or WiFi connection. Failures can happen within the app, on the Internet connection, and even at the web server or CDN, and when they happen they can frustrate users and eventually turn them into unhappy customers. The challenge for you as web developers and IT managers: it's often hard for the people managing websites to get a really good look at the end-user experience, because it can be so hard to duplicate.

In general, most users blame their "internet connection", which is a euphemism for "it's the cellphone provider's fault" or "the DSL or cable company's fault". Most people don't know or really care where the problem is; they just want it fixed. A common reflex when there is a problem is to simply reload the entire page. In some cases a full page reload isn't possible, or it's painful, such as on more complex sites where a full reload means walking back through several steps to get to the final page or view.

So here are a few suggestions to you, as a web developer, to help minimize occasional disruptions and keep users as happy as possible. Some of these are major repeats but they are well worth seeing yet again:

Performance. Make your web pages as lightweight as possible. Pages that load faster will 'appear' more responsive even if you aren't concerned about millisecond response times. Most of you will have had this drilled into your head over and over: the goal should be fewer and smaller files, using CDNs, moving CSS and JavaScript library loading operations to the bottom of your HTML pages, using inline images, and the list goes on and on. There are many articles on the web about improving performance; search for 'website performance' to find out more. Steve Souders, for example, has an excellent website and has even written books on the subject.

Caching. Consider page cache settings carefully. The subject of cache headers, such as ETag, Expires and Last-Modified, is often overlooked and usually misunderstood. Cached content cuts down on the total number of HTTP requests when someone loads your web page. Static content, or content that doesn't change much, usually gets longer cache times than content that changes frequently. Even though there are many articles on the web about caching, doing it well can be tricky, and it can be very handy to hire an expert to figure out optimal configurations in a short period of time. Or, in my case, I spent several months experimenting, while subjecting my blog readers to unnecessary page lag and a variety of other problems, until I finally broke down and hired an expert.

HTTP requests that block. Be aware of any HTTP operations that block the loading or use of your pages. If you have to use a blocking HTTP request, then make sure you set a timeout in the client request, such as 20 seconds, and display some sort of loading icon; a good web designer can help walk you through the UI experience. Most modern web servers have server-side timeouts that are longer than most people are willing to wait.
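
As a sketch, XMLHttpRequest has timeout support built in (the URL and the UI helper functions here are hypothetical):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/billing', true);
xhr.timeout = 20000; // fail after 20 seconds instead of hanging the page

xhr.ontimeout = function () {
    hideLoadingIcon(); // hypothetical UI helpers
    showMessage('The request timed out. Please try again.');
};
xhr.onload = function () {
    hideLoadingIcon();
    // render xhr.responseText here
};

showLoadingIcon();
xhr.send();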

Auto-retry. Alternatively, consider a significantly shorter HTTP timeout setting and retry the connection several times before failing and notifying the user that the app couldn't connect. These days a single failed request doesn't necessarily mean the website is down, but very, very few websites employ this pattern, so most people reflexively keep hitting reload whenever there are loading problems. Reloading an entire page is much more bandwidth-intensive for your servers than having the app quietly and quickly retry a specific item in the background.
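
A minimal sketch of that pattern, with a short timeout and a fixed number of quiet retries (the numbers are examples, not recommendations):

function loadWithRetry(url, retriesLeft, onSuccess, onFailure) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.timeout = 5000; // much shorter than a typical server-side timeout

    xhr.onload = function () { onSuccess(xhr.responseText); };
    xhr.onerror = xhr.ontimeout = function () {
        if (retriesLeft > 0) {
            // Retry quietly in the background rather than making the
            // user reload the entire page.
            setTimeout(function () {
                loadWithRetry(url, retriesLeft - 1, onSuccess, onFailure);
            }, 2000);
        } else {
            onFailure();
        }
    };
    xhr.send();
}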

More efficient database polling. Long-running database queries can give the impression that the connection is broken. If you have requirements to poll a server-side database for changes, consider implementing a server-based process that simply returns a JSON Boolean such as {"changes": false} when there is nothing new. In comparison, most server-side database requests run entire, and potentially complex, SQL queries on every internet request just to tell you that nothing changed. From a server resource preservation viewpoint, it's significantly less overhead to return a simple JSON Boolean and let a long-running server-side process do the heavy lifting on a regular timer cycle.
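
A sketch of the client side of that pattern (the endpoint and function names are hypothetical):

function pollForChanges() {
    var xhr = new XMLHttpRequest();
    // Hypothetical endpoint that returns {"changes": false} until a
    // long-running server-side process flags new data.
    xhr.open('GET', '/api/changes', true);
    xhr.onload = function () {
        var result = JSON.parse(xhr.responseText);
        if (result.changes) {
            fetchFullDataset(); // hypothetical: run the expensive query only now
        }
    };
    xhr.send();
}

setInterval(pollForChanges, 30000); // lightweight check every 30 seconds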

Fail gracefully. Don't hang an entire page if your app fails to load a JavaScript library, some other content throws a 404 error, or a database request fails. Don't do it. I know this seems obvious, but I see it all the time in my daily web surfing, and most major companies seem to be guilty of it on pages such as billing views. See my suggestions above for handling HTTP requests, and let the end user know through some sort of pop-up that a connection has failed or timed out. Native mobile applications have built-in mechanisms for doing this, and granted, they can auto-detect when the Internet connection goes down, but I still believe regular web apps should mimic that behavior when possible.

ApplicationCache. Consider storing some pages and resources for when a connection goes down by using the HTML5 ApplicationCache interface. This lets you go beyond the typical caching mechanisms using patterns that can be easier to understand and control as compared to the somewhat black box and variable nature of header settings.

Feedback. The ability to email web administrators directly has lost favor over the last five years or so. I suggest bringing it back in a big way, along with clearly posted links. Sometimes the best way to know something is down or slow is to hear it directly and immediately from a customer. Yeah sure, you'll get some spam email, but if it means keeping customers happy then there are both automated and manual ways to deal with it that work. I can speak personally on this topic, as my blog has received over 40,000 spam attempts, of which I've personally deleted over 3,000, and I'm just a team of one. Some techy sites do provide a "Performance" section in their forums, which is fine as long as employees are actually monitoring it (often). The problem with forums is notification of new posts, which, of course, is usually done via email.

Uptime Monitors. Use uptime monitors from different spots around the country you live in, or around the world if you are using a worldwide CDN. Some providers can do this for you, but you should ask questions. The most common scenario I've seen is that the uptime monitor lives in the same server farm as the web server; this is okay, but it doesn't cover connectivity outside your firewall. Uptime monitors shouldn't just ping a website: they should also attempt to load and parse actual content, send a warning email or text message if the content throws an error, and warn if a connection takes too long. There are many reasons why you may think your website is up when it's not: a CDN node could be down, a CDN server could have the wrong permissions, a major Internet router could be down, or your support folks could be using an internal pathway to view pages on your web server that is no longer visible to the outside world. These types of monitors don't cost much to operate and can significantly boost customer service ratings and help keep customers happy.

Browser Support. Last but not least, and probably the touchiest subject, is browser support. My recommendation: if you don't support a particular browser type, give the end user a message saying some functionality may not work properly. We've all been to sites on our tablets or phones where popups didn't work right or things didn't display properly. Non-tech-savvy end users can easily misunderstand these things, since they understandably give the appearance that something is broken; if a popup didn't work, it may appear that a sale did not complete, for example. It's very easy these days to use libraries for browser detection, and browser detection should always be part of a web app deployment plan.

Resources

HTTP Caching Protocols (W3C)

What is a CDN?

Beginners Guide to ApplicationCache

Browser support – Caniuse.com

Posted in Browsers, Internet | Comments Off

Where Part 1 focused on non-GPS-enabled devices, Part 2 is totally focused on mobile web geolocation. The great news is that using HTML5 location services, alongside the fact that there is a GPS chipset in most, if not all, modern smartphones and tablets, dramatically improves the chances of getting an accurate location. And besides that, mobile geolocation is simply a lot of fun to work with.

I also want to point out that there are an increasing number of really good blog posts covering the topic of “how to use” the API that look at the nitty-gritty of how the code works. This post is different in that I’ve tried to focus on “how to build successful applications” with the API, and how to get the most out of the API so that you can successfully implement your unique requirements.

What’s different about desktop vs. mobile HTML5 Geolocation? With mobile you can access the GPS if it’s available. It’s important to note that in order to access a device GPS you have to set the optional enableHighAccuracy property in your code. Contrary to what is shown in some samples on the internet, you can use this property with both the getCurrentPosition() and watchPosition() functions.

// One-time snapshot
navigator.geolocation.getCurrentPosition(
     processGeolocation,  // success callback
     geolocationError,    // optional error callback
     // Optional settings below
     {
         timeout: 0,
         enableHighAccuracy: true,
         maximumAge: Infinity
     }
);

// Tracking the user's position
watchId = navigator.geolocation.watchPosition(
     processGeolocation,  // success callback
     geolocationError,    // optional error callback
     // Optional settings below
     {
         timeout: 0,
         enableHighAccuracy: true,
         maximumAge: Infinity
     }
);

How accurate is it??? This is the million dollar question, right? When using enableHighAccuracy on a phone where all the appropriate permissions have been selected and granted, I've typically seen accuracy readings as low as 3 meters (~10 feet), obtained within 10 – 30 seconds of kicking off the geolocation functionality. I'd consider that excellent for most consumer and retail applications. Be aware that, like any location-based functionality, you will get spurious (abnormal) results that fall way outside the norm, and sometimes these results are wildly wrong.

I've seen claims that using the enableHighAccuracy property slows down the phone's ability to deliver a location. I'm going to argue that those claims are misleading. It is true that the GPS itself can take a significant amount of time to warm up and start delivering high-accuracy results; for an in-depth look at that topic see my post on the Six Most Common Use Cases for Android GPS. However, there are conditions where simply enabling the enableHighAccuracy property doesn't affect the speed at which you can get the initial result. More on these topics below.

What is the best way to try out various configuration scenarios? I've built an HTML5 Geolocation Testing tool that can be used in the browser, or repurposed to work in PhoneGap or Titanium. It is a jQuery-based mobile application that includes a map and a settings view where you can adjust all the different properties and try out different configuration scenarios. It's a work in progress, so I welcome suggestions and pull requests.

Why HTML5 Geolocation rather than native? Applications using HTML5 Geolocation typically have slightly different requirements than native GPS-based applications. Each platform has its advantages and disadvantages, and it all comes down to your requirements, budget, timeframes and skill sets:

  • Ability to re-use existing JavaScript and HTML5 skills to build a high-accuracy mobile application.
  • Don’t have access to native platform developers or skillsets on Android, iPhone and/or Windows Phone.
  • Need a cross-platform stand-alone web app, or a web app that has been repurposed to work with PhoneGap or Titanium.
  • Quickly locate the user/consumer within a reasonable expectation of accuracy.
  • Typically it is a non-commercial, consumer grade application that does not have extremely high accuracy requirements (e.g. < 1 meter).

How fast can I get an initial location result? The answer is very fast, potentially within a few seconds, given the following scenarios:

  • If there was a cached GPS or Network location stored on the phone. The GPS location is, of course, from the GPS chipset. The Network location comes from your wireless carrier and is dependent on your phone and their capabilities.
  • How the timeout and maximumAge properties are set. If you set timeout = 0 and maximumAge = Infinity it will force the application to grab any cached location, if one is available. Other settings may result in delays.
  • If the phone or tablet has decent internet connectivity and Wifi enabled.
  • If the device is in an urban area with many wifi nodes broadcasting their SSIDs nearby.
  • The device has a clear and uninterrupted view of the sky. GPS’s listen for a very weak signal from multiple satellites. These signals can be partially or completely blocked by buildings, thick foliage, vehicle roofs, etc.

How accurate is the initial location result? Hah, you might have guessed I'd say it depends. When you first kick off a geolocation request, accuracy depends on a number of the factors mentioned above. It's safe to say that, in the vast majority of cases, the first location is not the most accurate and typically not the most dependable. If you want the fastest, most accurate location possible then you will most likely need to either take multiple snapshots, or use watchPosition until your desired level of accuracy is met (see the sketch after the following list). It's important to note, because I've been asked about this many times: you cannot expect the GPS itself to have had enough time to lock onto satellites and deliver a fast, accurate initial location. It may take dozens of seconds or even minutes. Yep, it's true. Factors that affect initial location accuracy include:

  • Cached locations – how recently the user accessed location functionality. For example, applications like Facebook typically grab a location when you open the app, so frequent users of social media are more likely to have a fresh, cached location than non-social-media users. If you are targeting business travelers, the cached location might be the last city before they got on a plane. Or, it could be your home neighborhood and not where you work or go to games.
  • WiFi turned "on". If the WiFi is turned on, the device can access the location service and there is a much greater chance that the initial result is fairly accurate. If you didn't have a chance to read Part 1: when the WiFi is on, your browser gathers local WiFi node information from your WiFi card, and it can use that information in a location service provider request over the internet to try to triangulate your position. Typically this means your initial location can be within a block or two of the actual position. It's also possible, with WiFi turned on, to get a significantly more accurate initial location than if you were using GPS by itself with no WiFi or internet.
  • Internet connectivity strength. If you have a poor internet connection and no Wifi, then the browser’s requests to the location service can be delayed, blocked or even interrupted.
  • No VPN. Take note commercial application developers: as mentioned in Part 1, if VPN software is in use it can wildly affect accuracy and even place you in another State (or Country).
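
And here's the watch-until-accurate pattern mentioned above as a minimal sketch (the 20 meter threshold is just an example; processGeolocation and geolocationError are the handlers from the earlier samples):

var watchId = navigator.geolocation.watchPosition(
    function (position) {
        // Keep watching until the fix is good enough, then stop
        // so we don't drain the battery.
        if (position.coords.accuracy <= 20) {
            navigator.geolocation.clearWatch(watchId);
            processGeolocation(position);
        }
    },
    geolocationError,
    { enableHighAccuracy: true, timeout: 30000, maximumAge: 0 }
);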

Can I use HTML5 Geolocation for mobile tracking? Yes, with caveats. Typically HTML5 tracking applications are built inside a native wrapper framework such as PhoneGap or Titanium. There are several immediate problems with stand-alone, browser-only HTML5 tracking applications. First, there is no built-in functionality to keep the screen from going to sleep. Second, when the screen goes to sleep, the HTML5 Geolocation functionality also goes to sleep; native tracking applications can work around these limitations and listen passively in the background while minimized. Third, you have little control over the GPS settings that help manage battery consumption.

Can I use HTML5 Geolocation offline? Yes! If there is no cellular connection or Wifi available, then HTML5 Geolocation can still access cached locations and real-time GPS information. This is vastly different from what was discussed in Part 1 as related to applications targeted at laptops, desktops and tablets that may or may not have GPS. If a device does not have a built-in or externally available GPS then your offline application will not work.

Handling abnormal location results. Your application will occasionally encounter wildly inaccurate results, and you need to handle these gracefully for the best user experience possible. My recommendation is to check the timestamps and the distance traveled between the current geolocation object and the previous one; if the distance or implied speed seems excessive, reject the result. In the reference section below is a link to more information on calculating the distance between two points of latitude and longitude. As an example, see the attached screenshot with the spurious results indicated by red circles. Also note in the screenshot that the accuracy level was 3 meters, so it's important to understand that even at high accuracy levels you still need to verify that each location meets your minimum requirements. This way your results will always look polished and professional to the end user.
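
A minimal sketch of that sanity check, using the haversine formula to estimate the distance between consecutive fixes (the 50 meters-per-second speed threshold is an arbitrary example):

function distanceMeters(lat1, lon1, lat2, lon2) {
    var R = 6371000; // Earth's mean radius in meters
    var toRad = Math.PI / 180;
    var dLat = (lat2 - lat1) * toRad;
    var dLon = (lon2 - lon1) * toRad;
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

function isSpurious(prev, curr) {
    var meters = distanceMeters(prev.coords.latitude, prev.coords.longitude,
                                curr.coords.latitude, curr.coords.longitude);
    var seconds = (curr.timestamp - prev.timestamp) / 1000;
    // Reject anything implying a speed over ~50 m/s (~110 mph).
    return seconds > 0 && (meters / seconds) > 50;
}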

Spurious results

What are some of the downsides of using HTML5 Geolocation versus native? The bottom line is that for simple location gathering and basic tracking HTML5 Geolocation is just fine. This should meet the requirements for most consumer applications. For anything more complex than that you should consider looking at going native.

  • It may not work on older phones and older browsers (depending on your definition of old). See below in the references section for a link to a fallback library to handle these situations.
  • HTML5 Geolocation offers significantly less control over GPS settings. This can have an unacceptable impact on more complex applications. Because of this, I also suggest that HTML5 Geolocation is not suitable for long-running tracking applications.
  • Battery life management. This is a direct result of the previous bullet. It's more challenging to manage battery life with HTML5 Geolocation if your requirements call for continuous use of the GPS. Your control is limited to two properties: timeout and maximumAge.
  • Cannot use it when the application is minimized. If your requirements call for the ability to passively receive locations while in a minimized state then, as mentioned earlier, you will have to go native.
  • Very little control over how often you want location updates. You’ll need to do a bunch of custom coding to emulate what is already built into native application APIs. For example, the native Android API offers very detailed control over what type of geolocation data you can get access to, how you can access it and how often. Read more on that topic in my post on How Accurate is Android GPS Part 1 – Understanding Location Data and also take a look at Android’s LocationManager Class.

References

W3C Geolocation API Specification 

HTML5 Geolocation Test Tool

Mozilla – Using Geolocation

Calculating distance between two points.

Geolocation fallback library for older browsers

Posted in Browsers, HTML5, Mobile | 19 Comments »