Yay, I’ll be at OSCON again this year! My presentation is on July 23, 2014 at 5pm in Portland Room 252. For those of you who aren’t familiar with OSCON, it’s one of the largest, if not the largest, Open Source conventions in the U.S. Just take a look at the program schedule and you’ll see topics covering just about every open source project or initiative in existence.

I’ve learned a ton every time I’ve attended OSCON and I’m always happy to give back to the community in the form of presenting on lessons learned over the previous year.  In the past I’ve talked about HTML5 Geolocation and Android GPS. This time I’m presenting on best practices for IndexedDB.

If you’ve ever wanted to store large amounts of data in the browser, then you’ve most likely read about IndexedDB. It’s a transactional database in which you retrieve items via a key, and it’s an especially useful tool for taking data offline. While I will spend some time discussing what it is, I’ll spend most of my time on how to best use it. I’ll also examine the fastest way to retrieve data from the database, and look at considerations for pre- and post-processing, which are rarely discussed but can dramatically affect application performance.
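To give a flavor of the key-based retrieval model, here is a minimal sketch of fetching one record by key. The database name (“appCache”), store name (“images”) and version number are made-up values for illustration only:

```javascript
// Minimal IndexedDB read-by-key sketch. "appCache" and "images"
// are hypothetical names; substitute your own database and store.
function getByKey(key, callback) {
    var open = indexedDB.open('appCache', 1);

    open.onupgradeneeded = function (event) {
        // Runs only when the database is first created (or upgraded).
        event.target.result.createObjectStore('images');
    };

    open.onsuccess = function (event) {
        var db = event.target.result;
        var request = db.transaction('images', 'readonly')
                        .objectStore('images')
                        .get(key);
        request.onsuccess = function () {
            callback(request.result); // undefined if the key is absent
        };
    };
}
```

Everything here is asynchronous, which is exactly why the best-practices side of things deserves its own talk.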

I hope to see you there!

Posted in Conferences | No Comments »

Using async tokens with JavaScript FileReader

The JavaScript FileReader is a very powerful, efficient and asynchronous way to read the binary content of files or Blobs. Because it’s asynchronous, if you are doing high-volume, in-memory processing there is no guarantee as to the order in which read events complete. This can be a challenge if you need to associate some additional unique information with each file or Blob and persist it all the way through to the end of the process. The good news is there is an easy way to do this using async tokens.

Using an asynchronous token means you can assign a unique Object, Number or String to each FileReader operation. Once you do that, the order in which the results are returned no longer matters. When each read operation completes you can simply retrieve the token uniquely associated with the original file or Blob. There really isn’t any magic. Here is a snippet of the coding pattern. You can test out a complete example on GitHub.

function parse(blob, token, callback){

    // Always create a new instance of FileReader every time.
    var reader = new FileReader();

    // Attach the token as a property to the FileReader Object.
    reader.token = token;

    reader.onerror = function (event) {
        console.error(new Error(event.target.error.code).stack);
    };

    reader.onloadend = function (evt) {
        if(this.token != undefined){

            // The read operation is complete.
            // Now we can retrieve the unique token associated
            // with this instance of FileReader and hand it back
            // along with the result.
            callback(this.result, this.token);
        }
    };

    // Kick off the asynchronous read.
    reader.readAsArrayBuffer(blob);
}

Note, it is a very bad practice to simply associate the FileReader result object with the token being passed into the parse() function’s closure. Because the results from the onloadend events can be returned in any order, each parsed result could end up being assigned the wrong token. This is an easy mistake to make and it can seriously corrupt your data.
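To see why the token pattern holds up, here is a small stand-alone sketch. No actual FileReader is involved; stand-in objects carry the token, and the completion callbacks are deliberately fired in reverse order to mimic async scrambling. Each result still lands under its own token:

```javascript
// Collect completion callbacks, then fire them out of order.
var completions = [];

function fakeParse(data, token, callback) {
    var reader = { token: token };                  // stand-in for a FileReader
    completions.push(function () {
        callback(data.toUpperCase(), reader.token); // pretend parse result
    });
}

var results = {};
['alpha', 'beta', 'gamma'].forEach(function (item, i) {
    fakeParse(item, 'blob' + i, function (result, token) {
        results[token] = result;                    // completion order is irrelevant
    });
});

// Fire the completions in reverse order to mimic async scrambling.
completions.reverse().forEach(function (complete) { complete(); });

console.log(results.blob0, results.blob1, results.blob2); // ALPHA BETA GAMMA
```

Even though the third “read” finished first, every token still maps to the result it started with.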

Posted in JavaScript | No Comments »

Fastest way to find an item in a JavaScript Array

There are many different ways to find an item in a JavaScript array. With a little bit of testing and tinkering, I found some methodologies were faster than others by close to 200%!

I’ve been doing some performance tweaking on a very CPU-intensive JavaScript application, and I needed really fast in-memory searching on a temporary array before writing that data to IndexedDB. So I did some testing to decide on the approach with the best search times. My objective was to coax out every last micro-ounce of performance. The tests were done in pure JavaScript, with no third-party libraries, so that I could see exactly what was going on in the code.

I looked at five ways to parse what I’ll call a static Array. This is an array that, once it is written, you aren’t going to add anything new to; you simply access its data as needed and delete it when you are done.

  1. Seek. Create an index Array based exactly on the primary Array. It only contains names or unique ids in the same exact order as the primary. Then search for indexArray.indexOf(“some unique id”) and apply that integer against the primary Array, for example primaryArray[17], to get your result. If this doesn’t make sense take a look at the code in my JSFiddle.
  2. Loop. Loop through every element until I find the matching item, then break out of the loop. This pattern should be the most familiar to everyone.
  3. Filter. Use Array.prototype.filter.
  4. Some. Use Array.prototype.some.
  5. Object. Create an Object and access its key/value pairs directly using an Object pattern such as parsedImage.image1 or parsedImage["image1"]. It’s not an Array, per se, but it works with the static access pattern that I need.
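Here is a rough sketch of the two fastest patterns from the list above. The array contents and ids are made up for illustration; see my JSFiddle for the full test harness:

```javascript
var primaryArray = [
    { id: 'image1', data: 'base64data1' },
    { id: 'image2', data: 'base64data2' },
    { id: 'image3', data: 'base64data3' }
];

// Approach #1 SEEK: a parallel index Array of unique ids in the
// exact same order as the primary Array.
var indexArray = primaryArray.map(function (item) { return item.id; });
var seekResult = primaryArray[indexArray.indexOf('image2')];

// Approach #5 OBJECT: direct key/value access, no searching at all.
var parsedImages = {};
primaryArray.forEach(function (item) { parsedImages[item.id] = item; });
var objectResult = parsedImages['image2'];

console.log(seekResult === objectResult); // true
```

The OBJECT pattern wins because a property lookup skips the scan that even indexOf() has to do under the hood.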

I used the Performance Interface to get the high-precision, sub-millisecond numbers needed for this test. Note, this Interface only works on Chrome 20 and 24+, Firefox 15+ and IE 10. It won’t run on Safari or Chrome on iOS, so I bolted in a shim so you can also run these tests on your iPad or iPhone.
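The shim boils down to falling back to Date.now() when performance.now() is missing. The fallback loses the sub-millisecond precision but keeps the tests running everywhere. A minimal sketch:

```javascript
// Use performance.now() when available, otherwise fall back to
// Date.now() (millisecond precision only).
var perf = (typeof performance !== 'undefined' && performance.now)
    ? performance
    : (function () {
          var start = Date.now();
          return { now: function () { return Date.now() - start; } };
      })();

var t0 = perf.now();
for (var i = 0, sum = 0; i < 100000; i++) { sum += i; } // work to measure
var t1 = perf.now();
console.log('elapsed ms:', t1 - t0);
```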

My JSFiddle app creates an Array containing many base64 images and then runs hundreds of test loops against it using the five approaches. It performs a random seek on the Array or Object during each iteration, which offers a better reflection of how the parsing algorithm would behave under production conditions. After the loops are finished, it spits out an average completion time for each approach.

The results are very interesting in terms of which approach is more efficient. Now, I understand in a typical application you might only loop an Array a few times. In those cases a tenth or even hundredth of a millisecond may not really matter. However if you are doing hundreds or even thousands of manipulations repetitively, then having the most efficient algorithm will start to pay off for your app performance.

Here are some of the test results based on 300 random seeks against a decent-sized array containing 300 elements. It’s actually the same base64 image copied into all 300 elements. You can tweak the JSFiddle and experiment with different array sizes and numbers of test loops. I saw similar performance between Firefox 29 and Chrome 34 on my MacBook Pro as well as on Windows. Approach #1 (SEEK) was consistently the fastest of the Array-based approaches, and OBJECT was by far the fastest overall:

Run 1:
OBJECT Average 0.0005933333522989415 (Fastest. ~191% less time than LOOP)
SEEK Average 0.0012766665895469487 (181% less time than LOOP)
SOME Average 0.010226666696932321
FILTER Average 0.019943333354603965
LOOP Average 0.02598666658741422 (Slowest)

Run 2:
OBJECT Average 0.0006066666883028423 (Fastest. ~191% less time than LOOP)
SEEK Average 0.0012900000368244945 (181% less time than LOOP)
SOME Average 0.012076666820018242
FILTER Average 0.020773333349303962
LOOP Average 0.026383333122745777 (Slowest)

As for testing on Android, I used my Nexus 4 running Android 4.4.2. It’s interesting to note that the OBJECT approach was still the fastest; the LOOP approach (Approach #2) was consistently dead last.

On my iPad 3 Retina, using both Safari and Chrome, the OBJECT approach was also the fastest; FILTER (Approach #3) seemed to come in dead last.

I ran out of time and wasn’t able to test this on IE 10 before publishing this post.


Some folks have blogged that you should never use Arrays for associative search. I think this depends on exactly what you need to do with the array; for example, if you need to do things like slice(), shift() or pop(), then sticking to an Array structure will make your life easier. For my requirements, where I’m using a static Array pattern, the Object pattern has a significant performance advantage. If you do need an actual Array, then the SEEK pattern was a close second in terms of speed.


JSFiddle Array Parse tests
Performance Interface

[Updated: May 18, 16:06, fixed incorrect info]

Posted in JavaScript | 2 Comments »

The one thing that Android needs the most

Android has really missed the boat on one thing that iTunes and iCloud do really well: the Android ecosystem doesn’t have a built-in, seamless solution for restoring a device from scratch.

There is no universal way to back up and restore Android’s home screen and your phone’s application organization, application data and settings, photos, videos, messages, ringtones, miscellaneous phone settings, etc.

What this means is that it’s a pain, and potentially time consuming, to rebuild your phone or tablet every time you buy a new Android device, your current phone dies because you dropped it, or you have to switch over to a replacement. The issue is further compounded by the fact that some apps prevent you from saving them to an SD card. I’m not sure if this is intentional or simply an oversight by the developer when configuring the application for upload to Google Play.

Third-party apps have jumped in to try and fill the void. Many take a really good stab at addressing the issue, but the solutions and their features can be a hodge-podge. Some, such as Titanium Backup, require you to root your phone, which many people are wary of because it voids the warranty. Others, such as App Backup & Restore, aren’t able to back up application data, which means all your settings are lost.

I would trade any new gimmicky feature or pseudo-incremental improvement for well-done backup and restore functionality in Android. Universal backup and restore would be a huge bonus for the entire Android community.

Posted in Android | No Comments »

How to tell if a hosting provider is excellent

I spent the previous three weeks fighting a losing battle and wasting hours with my ‘former’ hosting provider. It’s typically quite rare to have advanced-level technical problems on a hosted website. But when advanced problems do happen, you learn really fast whether a hosting provider is worthy of your business or not.

I’ve used quite a few shared and dedicated hosting providers over the years, for a variety of reasons both personal and business-related. So I decided to go above and beyond the information you get by simply perusing hosting reviews. Based on my experience, I’ve come up with a short list of how to determine whether a provider is bad, okay or excellent.

Technical support knowledge and speed. I placed this category first because it is almost always overlooked, and it is perhaps the most important factor in getting your site going and maintaining it once it is up and running. You can test this out by calling a provider’s toll-free support line, or via the chat window services some providers offer. Here are some things to look for when shopping for a provider:

  • Measure the time it takes for them to answer the phone or get a chat window response during peak business hours. Getting an initial response in less than one minute is excellent. Being on hold for longer than 5 minutes can mean a shortage of trained people in the tech support call center and potentially very long wait times when you need them most.
  • Repeat bullet #1 several times during the day, and I’d recommend asking questions during the late night hours as well. For many of us, that’s when we’re most likely to be tackling personal projects.
  • Ask them several highly technical questions and critically judge the answers; you might be surprised at what you hear. Ask the same questions on a different call with a different support technician and look for consistency. Example questions are listed below. Note that you don’t have to ask all of them; pick and choose depending on your needs. This is just a partial list to give you ideas:
    • Does the shared server have PHP (or .NET) already installed? If the support person doesn’t know then move on to the next provider in your list.
    • How do I access my database via phpMyAdmin (or SQL management tools)? If you ever need to fix or compress a database you’ll need access to the database management tools.
    • How can I modify my .htaccess file (Linux)? Or how do I configure my IIS (Windows)? For certain advanced requirements you may have to make tweaks to how your website runs.
    • What is the maximum size allowed for a MySQL (or SQL) database? Most blogging software only allows you to use one database at a time, so if a provider offers “unlimited databases” that could be a worthless feature for you. In that case the maximum size is what matters. Your blog may stop working properly if you hit the maximum, and then you may need advanced assistance to fix the problem. Furthermore, if your site has a runaway plug-in or it gets spammed, you could easily fill up a database and cause it to lock up.
    • How much bandwidth do I get per month, and what happens if I go over the limit? For a typical small business or personal blog hosting site, excellent numbers reach or exceed FiOS speeds of around 30–50 Mbits/second for both upload and download.
    • What is your procedure for handling Denial of Service (DoS) attacks? One nice thing about shared hosting is it’s in the provider’s best interest to help with the brute-force attacks that can happen to almost any website.
    • What are the upload/download speeds on a typical shared host and what is the guaranteed minimum/maximum?
    • Do you auto-update PHP, WordPress, etc.? Many updates these days are for security reasons; not having to worry about them can be a good thing.
    • Do you offer website and database backups for free? You should always, always back up everything.
    • How long do you keep the website and database backups? Some providers only keep backups for three days. This may be okay if you always diligently watch your website; make sure you are comfortable with it. I’ve seen databases get hacked and blown away, and by the time the site owner realized it the backups were worthless. It doesn’t happen very often, but it can happen. Some bloggers make it a point to download a copy of their website and database once a month for peace of mind.
    • Do you offer ftp as well as web-based file management? Non-tech savvy bloggers may want to consider web-based file management over the more technical ftp approach.
    • If you can’t get an answer to a specific question and the support tech directs you to email your question then run away as fast as you can. If you have a problem with your site you don’t want to potentially wait 24 hours for an email response via the ‘free’ support option from a provider. If your site goes down or is slow it can affect your SEO ratings.
    • Ask if they charge for advanced or escalated support, and if they do, ask for examples of what falls into that category. If you need to offer an example yourself, ask whether fixing a corrupted database counts as escalated support.
    • Most providers claim 24/7 support. Verify that the support is free for the entire 24-hour period.

3rd party reviews. Read as many third party hosting reviews as you can and read them carefully. Make sure to check the dates of the reviews. You will find contradictory information, especially in reviews that list providers in a “top 10” style. That’s okay because this information is simply one piece of the puzzle. You still have homework to do.

Hosting provider outages. Do your own uptime research. Most shared hosting providers offer decent uptime numbers such as 99.9%. That still means that your system could be down and offline 43.8 minutes per month. If you are looking at a review site, see if you can find out where they got their uptime numbers.

There are a number of sites that provide basic outage information, such as uptime.netcraft.com (which had current information as of the writing of this post), that can give you some insight. Make sure you check to see if there is a date/time stamp on any analysis. Some sites that I reviewed for this blog post hadn’t been updated since 2011!

Money back guarantee. An excellent hosting provider will offer a trial period with a full money back guarantee. You need to read the fine print to see exactly what that means, and also make sure the 3rd party reviews agree that there is a guarantee.

Pre-installed software. Investigate whether the provider’s pre-installed software meets your needs. If not, then also look for “one-click” installs such as WordPress. One-click installs can save you a ton of time. Otherwise, you’ll need to be handy with ftp’ing large files, verifying/setting server permissions and making sure your server has all the required software for a proper install.

Redundancy. You should understand if your server exists at one facility or multiple facilities. Most, but not all, hosting providers copy your entire website across multiple facilities. Obviously, a hosting provider with a single location is more risky and providers with multiple locations should be spread out geographically. If you want international coverage for your website, then you will need to verify if a solution provider offers cloud-based or physical hosting coverage in particular countries.

Test your own download speeds. Once you’ve installed your blog or have your website up and running, make sure you test your website on a variety of internet connections, browsers and devices. And make sure to run your tests at various times of the day and night. Sometimes your site can get CPU or bandwidth squeezed. Keep an eye on these speeds over time. If you have a brand new site, your home page should ideally load in less than three seconds, and if possible less than one second. Get to know your average page load times and keep an eye out for this changing over time. It can be as simple as verifying your blog post every time you post a new one, just load the page and watch the performance numbers in the browser’s developer tools. 

Costs. If hosting costs were your primary decision factor then you probably wouldn’t have read down to the bottom of this blog post. You can get excellent hosting these days for under $4 a month and there is a lot of competition and providers trying to one-up each other. To me, cost is the icing on the cake if all the other important factors meet my requirements. It’s a great time to host a website or blog these days because of the competition and you should be bold about asking a provider if there are any discounts and add-ons they can apply.


These ideas should help steer you towards not just a good hosting solution but an excellent one. I also want to mention that hosting providers will change their policies and practices over time, especially if someone else acquires them. Continue to pay attention to your website. Even small hints can be important indicators that a once excellent provider is slipping, and don’t hesitate to switch if getting help starts to become more difficult or the performance of your website starts to decline. It’s possible, although exceedingly rare, that your existing provider will offer better performance and higher levels of support for free as time goes on. If you see that services which were once free are going to start costing money, that may be a warning sign, especially if it’s outside what was initially agreed upon in your contract terms. Lastly, if you get a notice that the hosting “terms of service” has been updated, it’s well worth your time to read (or at least glance through) that document, because the changes often aren’t in your favor.

Posted in Hosting | No Comments »

Making coding changes and reloading locally hosted web pages over and over again is a pattern familiar to web developers worldwide. Another familiar pattern is to constantly wonder whether your changes are being cached by the browser and not properly reflected in what you are seeing.

Fear not…there is a very easy fix for this, and it doesn’t involve using the browser’s empty-cache option every single time between page reloads. Simply tell your local web server to send the browser a “no-cache” pragma directive in the HTTP header and then you should be good to go.

Once you make this change every web page you serve locally will automatically refresh, every single time. Here’s what the W3C has to say about no-cache headers:

 When the no-cache directive is present in a request message, an application SHOULD forward the request toward the origin server even if it has a cached copy of what is being requested. 

Make the change in Apache. Here’s how you make the change in your /etc/apache2/httpd.conf file on the latest Mac OS running 10.8+. Depending on how your machine is set up, you can run the command “sudo pico httpd.conf”, enter your admin password, and use the shortcuts listed at the bottom of the pico window, or the ‘up’ and ‘down’ arrow keys, to navigate around the file. Typically, the following text is pasted below any other ‘filesMatch’ tags that may reside in the configuration file. Once you are done, be sure to restart Apache. On Mavericks the command is “sudo apachectl restart”:

<filesMatch "\.(html|htm|js|css)$">
    FileETag None
    <ifModule mod_headers.c>
        Header unset ETag
        Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
    </ifModule>
</filesMatch>

Make the change on IIS 7. If you want to make the change on Windows 7, Windows 2008/2008R2 or Vista, then here is a link to Microsoft Technet. If you are using IIS Manager, in Step 4 choose the expire immediately option. Or, if you are using the command line, copy this line and run it:

    appcmd set config /section:staticContent /clientCache.cacheControlMode:DisableCache

If you have some other operating system version hopefully you get the idea from the suggestions above and apply similar changes for your system.

Optimizing your own public web server cache settings. One last note, the no-cache header setting is typically only used in a development environment. To get the best page performance for your visitors you should allow some browser caching. If you want to learn more about optimizing browser caching here is a good article.


Optimizing Headers (Google)
RFC 2616, Section 14 Header Field Definitions
Configure HTTP Expires Response Header (IIS 7)
Manipulating HTTP headers with htaccess (you can make the same no-cache header change in httpd.conf in Mavericks)

Posted in Browsers | No Comments »