Fastest way to find an item in a JavaScript Array

There are many different ways to find an item in a JavaScript array. With a little bit of testing and tinkering, I found some methodologies were faster than others by close to 200%!

I’ve been doing some performance tweaking on a very CPU intensive JavaScript application and I needed really fast in-memory searching on a temporary array before writing that data to IndexedDB. So I did some testing to decide on an approach with the best search times. My objective was to coax out every last micro-ounce of performance. The tests were completed using a pure JavaScript methodology, and no third party libraries were used, so that I could see exactly what was going on in the code.

I looked at five ways to parse what I’ll call a static Array. This is an array that, once written, you aren’t going to add anything new to it; you simply access its data as needed, and when you are done you delete it.

  1. Seek. Create an index Array based exactly on the primary Array. It only contains names or unique ids in the same exact order as the primary. Then search for indexArray.indexOf(“some unique id”) and apply that integer against the primary Array, for example primaryArray[17], to get your result. If this doesn’t make sense, take a look at the code in my JSFiddle.
  2. Loop. Loop through every element until I find the matching item, then break out of the loop. This pattern should be the most familiar to everyone.
  3. Filter. Use Array.prototype.filter.
  4. Some. Use Array.prototype.some.
  5. Object. Create an Object and access its key/value pairs directly using an Object pattern such as parsedImage.image1 or parsedImage["image1"]. It’s not an Array, per se, but it works with the static access pattern that I need.
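To make the SEEK and OBJECT patterns concrete, here is a minimal sketch. The variable names and sample data are illustrative, not the exact code from the fiddle:

```javascript
// Approaches 1 (SEEK) and 5 (OBJECT) side by side.
var primaryArray = ["dataA", "dataB", "dataC"]; // e.g. base64 image strings
var indexArray = ["image1", "image2", "image3"]; // unique ids, same order as primary

// SEEK: indexOf on the lightweight index, then direct access by position.
function seek(id) {
  var i = indexArray.indexOf(id);
  return i === -1 ? null : primaryArray[i];
}

// OBJECT: build a key/value map once, then access keys directly.
var parsedImage = {};
for (var j = 0; j < indexArray.length; j++) {
  parsedImage[indexArray[j]] = primaryArray[j];
}

seek("image2");          // "dataB"
parsedImage["image2"];   // "dataB"
```

The OBJECT lookup skips the indexOf scan entirely, which is why it wins in the tests below.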

I used the Performance Interface to get the high-precision, sub-millisecond numbers needed for this test. Note, this Interface only works on Chrome 20 and 24+, Firefox 15+ and IE 10. It won’t run on Safari or Chrome on iOS, so I bolted in a shim so you can also run these tests on your iPad or iPhone.
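The shim can be as simple as the sketch below. It assumes falling back to Date.now(), at millisecond rather than sub-millisecond precision, is acceptable on browsers without the Performance Interface:

```javascript
// Use performance.now() where available; otherwise fall back to Date.now().
var perf = (typeof performance !== "undefined" && performance.now)
  ? performance
  : { now: function () { return Date.now(); } };

var t0 = perf.now();
// ... the work being measured goes here ...
var elapsed = perf.now() - t0; // sub-millisecond where supported
```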

My JSFiddle app creates an Array containing many base64 images and then loops through it, running hundreds of tests against it using the five approaches. It performs a random seek on the Array or Object during each iteration, which offers a better reflection of how the parse algorithm would work under production conditions. After the loops are finished, it spits out an average completion time for each approach.
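The timing loop boils down to something like this sketch. I’m using Date.now() here so the sketch runs anywhere; the actual fiddle uses the higher-precision Performance Interface, and the function name is my own:

```javascript
// Average the time taken by a search function over many random seeks.
function averageSeekTime(searchFn, keys, iterations) {
  var total = 0;
  for (var i = 0; i < iterations; i++) {
    // pick a random id, like the random seek in each test iteration
    var key = keys[Math.floor(Math.random() * keys.length)];
    var start = Date.now();
    searchFn(key);
    total += Date.now() - start;
  }
  return total / iterations; // average completion time in milliseconds
}
```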

The results are very interesting in terms of which approach is more efficient. Now, I understand in a typical application you might only loop an Array a few times. In those cases a tenth or even hundredth of a millisecond may not really matter. However if you are doing hundreds or even thousands of manipulations repetitively, then having the most efficient algorithm will start to pay off for your app performance.

Here are some of the test results based on 300 random array seeks against a decent size array that contained 300 elements. It’s actually the same base64 image copied into all 300 elements. You can tweak the JSFiddle and experiment with different size arrays and number of test loops. I saw similar performance between Firefox 29 and Chrome 34 on my MacBook Pro as well as on Windows. Approach #1 SEEK seems to be consistently the fastest on Arrays and Object is by far the fastest of any of the approaches:

OBJECT Average 0.0005933333522989415 (Fastest, ~191% less time than LOOP)
SEEK Average 0.0012766665895469487 (~181% less time than LOOP)
SOME Average 0.010226666696932321
FILTER Average 0.019943333354603965
LOOP Average 0.02598666658741422 (Slowest)


OBJECT Average 0.0006066666883028423 (Fastest, ~191% less time than LOOP)
SEEK Average 0.0012900000368244945 (~181% less time than LOOP)
SOME Average 0.012076666820018242
FILTER Average 0.020773333349303962
LOOP Average 0.026383333122745777 (Slowest)

As for testing on Android, I used my Nexus 4 running Android 4.4.2. It’s interesting to note that the OBJECT approach was still the fastest, and the LOOP approach (Approach #2) was consistently dead last.

On my iPad 3 Retina using both Safari and Chrome, the OBJECT approach was also the fastest, however the FILTER (Approach #3) seemed to come in dead last.

I ran out of time and wasn’t able to test on IE 10 before publishing this post.


Some folks have blogged that you should never use Arrays for associative search. I think this depends on exactly what you need to do with the array; for example, if you need to do things like slice(), shift() or pop(), then sticking to an Array structure will make your life easier. For my requirements, where I’m using a static Array pattern, the Object pattern has a significant performance advantage. If you do need an actual Array, then the SEEK pattern was a close second in terms of speed.


JSFiddle Array Parse tests
Performance Interface

[Updated: May 18, 16:06, fixed incorrect info]

Posted in JavaScript | 2 Comments »

The one thing that Android needs the most

Android has really missed the boat on one thing that iTunes and iCloud do really well: the Android ecosystem doesn’t have a built-in, seamless solution for restoring a device from scratch.

There is no universal way to backup and restore Android’s home screen and your phone’s application organization, your application data and settings, photos, videos, messages, ringtones, miscellaneous phone settings, etc.

What this means is that it’s a pain, and potentially time consuming, to rebuild your phone or tablet every time you buy a new Android, your current phone dies because you dropped it, or you have to switch over to a replacement. The issue is further compounded by the fact that some apps prevent you from saving them to an SD card. I’m not sure if this is intentional or simply an oversight by the developer when they configured the application for uploading to Google Play.

Third party apps have jumped in to try and fill the void. Many take a really good stab at addressing the issue, but the solutions and their features can be a hodge-podge. Some, such as Titanium Backup, require you to root your phone, which many people are wary of because it voids any warranties. Others, such as App Backup & Restore, aren’t able to back up the application data, which means all your settings are lost.

I would trade any new gimmicky feature or pseudo-incremental improvement for well-done backup and restore functionality in Android. Universal backup and restore would be a huge bonus for the entire Android community.

Posted in Android | No Comments »

How to tell if a hosting provider is excellent

I spent the previous three weeks fighting a losing battle and wasting hours with my ‘former’ hosting provider. It’s typically quite rare to have advanced-level technical problems on a hosted website, but when advanced problems do happen you learn really fast whether a hosting provider is worthy of your business.

I’ve used quite a few shared and dedicated hosting providers over the years for a variety of reasons, both personal and business-related. So I decided to go above and beyond the information you get by simply perusing hosting reviews. Based on my experience, I’ve come up with a short list of how to determine if a provider is bad, okay or excellent.

Technical support knowledge and speed. I placed this category first because it is almost always overlooked, and it is perhaps the most important factor in getting your site going and maintaining it once it is up and running. You can test this out by calling a provider’s toll-free support line; some providers also offer chat window services. Here are some things to look for when shopping for a provider:

  • Measure the time it takes for them to answer the phone or get a chat window response during peak business hours. Getting an initial response in less than one minute is excellent. Being on hold for longer than 5 minutes can mean a shortage of trained people in the tech support call center and potentially very long wait times when you need them most.
  • Repeat bullet #1 several times during the day and I’d recommend asking questions during the late night hours as well. For many of us that’s when you are most likely to be tackling personal projects.
  • Ask them several highly technical questions and critically judge the answers you get; you might be surprised. Ask the same questions on a different call with a different support technician and look for consistency. Example questions can include the following. Note you don’t have to ask all of these questions; pick and choose depending on your needs. This is just a partial list to give you ideas:
    • Does the shared server have PHP (or .NET) already installed? If the support person doesn’t know then move on to the next provider in your list.
    • How do I access my database via phpMyAdmin (or SQL management tools)? If you ever need to fix or compress a database you’ll need access to the database management tools.
    • How can I modify my .htaccess file (Linux)? Or how do I configure my IIS (Windows)? For certain advanced requirements you may have to make tweaks to how your website runs.
    • What is the maximum size allowed for a MySQL (or SQL) database? Most blogging software only allows you to use one database at a time. If a provider offers “unlimited databases” that could be a worthless feature for you. In that case the maximum size is important. Your blog may stop working properly if you hit the maximum, and then you may need advanced assistance to fix the problem. Furthermore, if your site has a runaway plug-in or it gets spammed, you could easily fill up a database and cause it to lock up.
    • How much bandwidth do I get per month and what happens if I go over the limit? For a typical small business or personal blog hosting site, excellent numbers reach or exceed FiOS speeds, around 30 – 50 Mbits/sec for both upload and download.
    • What is your procedure for handling Denial of Service (D.O.S.) attacks? One nice thing about shared hosting is it’s in their best interest to assist with most brute force attacks that can happen to almost any website.
    • What are the upload/download speeds on a typical shared host and what is the guaranteed minimum/maximum?
    • Do you auto-update PHP, WordPress, etc.? Many updates these days are for security reasons; not having to worry about them can be a good thing.
    • Do you offer website and database backups for free? You should always, always back up everything.
    • How long do you keep the website and database backups? Some providers only keep backups for three days. This may be okay if you always diligently watch your website. Make sure you are comfortable with this. I’ve seen databases get hacked and blown away, and by the time the site owner realized it the backups were worthless. It doesn’t happen very often, but it can happen. Some bloggers make it a point to download a copy of their website and database once a month for peace of mind.
    • Do you offer ftp as well as web-based file management? Non-tech savvy bloggers may want to consider web-based file management over the more technical ftp approach.
    • If you can’t get an answer to a specific question and the support tech directs you to email your question then run away as fast as you can. If you have a problem with your site you don’t want to potentially wait 24 hours for an email response via the ‘free’ support option from a provider. If your site goes down or is slow it can affect your SEO ratings.
    • Ask if they charge for advanced or escalated support and if they do charge for advanced support ask for examples of what falls into that category. If you have to give them an example of escalated support, ask about support fixing a corrupted database.
    • Most providers claim 24/7 support. Verify that the support is free for the entire 24-hour period.

3rd party reviews. Read as many third party hosting reviews as you can and read them carefully. Make sure to check the dates of the reviews. You will find contradictory information, especially in reviews that list providers in a “top 10” style. That’s okay because this information is simply one piece of the puzzle. You still have homework to do.

Hosting provider outages. Do your own uptime research. Most shared hosting providers offer decent uptime numbers such as 99.9%. That still means that your system could be down and offline 43.8 minutes per month. If you are looking at a review site, see if you can find out where they got their uptime numbers.
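That 43.8-minute figure comes straight from the arithmetic, as a quick sanity check shows:

```javascript
// 99.9% uptime still allows 0.1% downtime over an average-length month.
var minutesPerMonth = (365.25 / 12) * 24 * 60; // ≈ 43,830 minutes in a month
var downtimeMinutes = minutesPerMonth * (1 - 0.999); // ≈ 43.8 minutes offline
```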

There are a number of sites that provide basic outage information that can give you some insight; one I reviewed had current information as of the writing of this post. Make sure you check to see if there is a date/time stamp on any analysis. Some sites that I reviewed for this blog post hadn’t been updated since 2011!

Money back guarantee. An excellent hosting provider will offer a trial period with a full money back guarantee. You need to read the fine print to see exactly what that means, and also make sure the 3rd party reviews agree that there is a guarantee.

Pre-installed software. Investigate if the provider’s pre-installed software meets your needs. If not, then also look for “one-click” installs such as WordPress. One-click installs can save you a ton of time. Otherwise, you’ll need to be handy with ftp’ing large files, verifying/setting server permissions and making sure your server has all the required software for a proper install.

Redundancy. You should understand if your server exists at one facility or multiple facilities. Most, but not all, hosting providers copy your entire website across multiple facilities. Obviously, a hosting provider with a single location is more risky and providers with multiple locations should be spread out geographically. If you want international coverage for your website, then you will need to verify if a solution provider offers cloud-based or physical hosting coverage in particular countries.

Test your own download speeds. Once you’ve installed your blog or have your website up and running, make sure you test your website on a variety of internet connections, browsers and devices. And make sure to run your tests at various times of the day and night. Sometimes your site can get CPU or bandwidth squeezed. Keep an eye on these speeds over time. If you have a brand new site, your home page should ideally load in less than three seconds, and if possible less than one second. Get to know your average page load times and keep an eye out for this changing over time. It can be as simple as verifying your blog post every time you post a new one, just load the page and watch the performance numbers in the browser’s developer tools. 

Costs. If hosting costs were your primary decision factor then you probably wouldn’t have read down to the bottom of this blog post. You can get excellent hosting these days for under $4 a month and there is a lot of competition and providers trying to one-up each other. To me, cost is the icing on the cake if all the other important factors meet my requirements. It’s a great time to host a website or blog these days because of the competition and you should be bold about asking a provider if there are any discounts and add-ons they can apply.


These ideas should help steer you towards not just a good hosting solution but an excellent one. I also want to mention that hosting providers will change their policies and practices over time, especially if someone else acquires them. Continue to pay attention to your website. Even small hints can be important indicators that a once excellent provider is slipping up, and don’t hesitate to switch if getting help starts to become more difficult or the performance of your website starts to decline. It’s possible, although exceedingly rare, that your existing provider will offer better performance and higher levels of support for free as time goes on. If you start seeing that services that were once free are going to cost money, that might be a warning sign if it’s outside what was initially agreed upon in your contract terms. Lastly, if you get a notice that the hosting “terms of service” have been updated, it’s well worth your time to read (or at least glance through) that document, because the changes often aren’t in your favor.

Posted in Hosting | No Comments »

Making coding changes and reloading locally hosted web pages over and over again is a pattern familiar to web developers worldwide. Another familiar pattern is constantly wondering whether your changes are being cached by the browser and not properly reflected in what you are seeing.

Fear not…there is a very easy fix for this, and it doesn’t involve using the browser’s empty-cache option every single time between page reloads. Simply tell your local web server to send the browser a “no-cache” pragma directive in the HTTP header and you should be good to go.

Once you make this change every web page you serve locally will automatically refresh, every single time. Here’s what the W3C has to say about no-cache headers:

 When the no-cache directive is present in a request message, an application SHOULD forward the request toward the origin server even if it has a cached copy of what is being requested. 

Make the change in Apache. Here’s how you make the change in your /etc/apache2/httpd.conf file on Mac OS 10.8+. Depending on how your machine is set up, you can run the command "sudo pico /etc/apache2/httpd.conf", enter your admin password, and use the shortcuts listed at the bottom of the pico window or the ‘up’ and ‘down’ arrow keys to navigate around the file. Typically, the following text is pasted below any other ‘filesMatch’ tags that may reside in the configuration file. Once you are done, be sure to restart Apache. On Mavericks the command is "sudo apachectl restart":

<filesMatch "\.(html|htm|js|css)$">
    FileETag None
    <ifModule mod_headers.c>
        Header unset ETag
        Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
    </ifModule>
</filesMatch>
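If you want to confirm the directives are actually reaching the browser, a small helper like this hypothetical one can check a response’s headers. In a real check you’d pass in res.headers from a Node http.get call against your local server:

```javascript
// Given a headers object (header names lowercased, as Node's http module
// provides them), report whether Cache-Control disables caching.
function disablesCaching(headers) {
  var cc = (headers["cache-control"] || "").toLowerCase();
  return cc.indexOf("no-cache") !== -1 && cc.indexOf("no-store") !== -1;
}

disablesCaching({ "cache-control": "max-age=0, no-cache, no-store, must-revalidate" }); // true
disablesCaching({ "cache-control": "max-age=3600" }); // false
```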

Make the change on IIS 7. If you want to make the change on Windows 7, Windows 2008/2008 R2 or Vista, then here is a link to Microsoft TechNet. If you are using IIS Manager, in Step 4 choose the expire-immediately option. Or, if you are using the command line, copy this line and run it:

    appcmd set config /section:staticContent /clientCache.cacheControlMode:DisableCache

If you have some other operating system version hopefully you get the idea from the suggestions above and apply similar changes for your system.

Optimizing your own public web server cache settings. One last note: the no-cache header setting is typically only used in a development environment. To get the best page performance for your visitors, you should allow some browser caching. If you want to learn more about optimizing browser caching, here is a good article.


Optimizing Headers (Google)
RFC 2616, Section 14 Header Field Definitions
Configure HTTP Expires Response Header (IIS 7)
Manipulating HTTP headers with htaccess (you can make the same no-cache header change in httpd.conf in Mavericks)

Posted in Browsers | No Comments »

Debunking the myth of the one second web page load rule

Ever since Google announced that every mobile web site should deliver sub-one-second loading times, I’ve been meaning to do a fun, pseudo-scientific study to help start the discussion of putting this well-intentioned goal into a perspective that everyday developers and businesses can understand. This topic has been heavily promoted, with most of the blog posts simply trumpeting agreement and either implying or coming right out and saying that any page that takes over one second to load is pretty much worthless. In case you’ve never read the actual article, here’s a quote from Google:

“we must deliver and render the above the fold (ATF) content in under one second, which allows the user to begin interacting with the page as soon as possible.”

I propose that most consumers will put up with a majority of sites that don’t deliver in exactly one second as long as the following general criteria are met: the user finds what they want or need the majority of the time, the user interface is fairly easy-to-use and intuitive, that there are limited or no overbearing advertisement banners (whoops!), and that the page loads within a reasonable amount of time. I really haven’t done enough research to know what a ‘reasonable amount of time’ is, but I know from testing that very few sites today deliver true sub-second performance. So let’s tear into this a bit.

First don’t get me wrong, I think sub-second performance is a very worthy goal and I absolutely think every web developer worth his or her salt should strive as hard as they can to make their web page performance as fast as possible. Reality is that budgets can be limited, time frames for deliverables can be short and not everyone is a website performance expert. However…I think we need to start asking some hard questions to make sense of the one-second rule and understand how it can be applied in our everyday development work rather than simply taking it at face value, for example:

  • What industries in particular does the rule apply to? I suspect it’s mostly applied to the online retail industry. I think web site visitors cut other industries a reasonable amount of slack.
  • Does the rule only apply to first time visitors?
  • Do repeat visitors abide by other page speed rules? Repeat visitors can take advantage of cached browser pages to speed up their viewing experience.
  • How do lousy mobile internet connections factor into this equation? For example, if someone knows they have a lousy internet connection most of the time, do they factor that into their online buying decisions?
  • Does the rule apply to all countries or just the U.S.?
  • What is the ideal internet connection speed that this rule is based on? It seems unlikely that a page would have to load under one second regardless of the connection speed.
  • Does this apply only to self-hosted websites? What if your website is hosted on Amazon Webstore or Etsy, where you don’t really have any control over how the webservers, DNS, cloud or internet pipe are configured?

I then went about verifying who the largest online retailers in the U.S. are by sheer sales volume, and I came up with Amazon, Staples, Apple and Walmart as good candidates for the top four. However you verify this list, we can all agree that these four sites generate a massive amount of internet traffic, billions of dollars in revenue per year, and perhaps even a majority share of internet sales. Given that these stores are where tens of millions of people successfully shop every day, I wanted to use the seemingly indisputable shopping experience of these retailers as a basis for comparison.

It seems like a fair assumption that these retailers must be doing something right, and therefore whatever they are doing could be a potential guideline for others. I theorize that people’s online shopping and surfing expectations are formed by the websites on which they shop the most. You tend to do in-store shopping at places where you are comfortable, and the same can be said for online shopping. Therefore, we need to understand these leading retailers’ performance baselines to get some basic numbers that we can compare against our own website’s performance.

The Device

For my device, I used my middle-of-the-pack Nexus 4 on DSL WiFi to ensure the best possible consistent connection. Where I live, 4G speeds can fluctuate quite a bit during the day, so in order to normalize those issues out of the tests I simply went with WiFi:

Android Nexus 4, Android v4.4.2
Native Chrome browser
12 Mb/sec DSL/WiFi via G Band (verified between 10 – 12 Mb/sec) – Your own WiFi experience will vary significantly.

To measure performance, I used the latest desktop version of Chrome Canary and its new mobile inspection tools, hooked up to my phone via USB cable. This works really, really well, by the way.

The Criteria

Here is what I was looking for. Your test results will vary based on your device, other running applications and internet connection speeds. I didn’t test iTunes because, well, I don’t use iTunes on my Android and it’s not a website. Believe it or not, when I went to Apple’s site on my Android I got a desktop website and not a mobile website.

I chose the following criteria to put context around the very first page load, since that’s what Google seems to focus on the most. My goal was to load each page two times: the first time with an empty browser cache and the second time with the website cached in the browser. Then I repeated the tests multiple times to help account for any anomalies.
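For reference, the “technical” load time the developer tools report can be computed from the Navigation Timing API. Here is a small sketch; the mock timing object below just illustrates the shape of performance.timing:

```javascript
// "Technical" page load time from Navigation Timing fields. In the browser
// you would call pageLoadMs(performance.timing) after the load event fires.
function pageLoadMs(timing) {
  return timing.loadEventStart - timing.navigationStart;
}

var exampleTiming = { navigationStart: 1000, loadEventStart: 2180 };
pageLoadMs(exampleTiming); // 1180 ms
```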

Here are the criteria I looked at:

  • Page lag with the browser cache empty. This represents a first time page visitor. By the way, I’m making a distinction between the technical time at which Chrome DevTools reports the page has loaded and when the various parts and pieces within a web page finish spinning up. This may result in a slight delay until you can actually start navigating around. This is a very subjective number and it’s really hard to eyeball accurately, but we’ve all experienced it: a web page can ‘appear’ to have loaded, yet when you go to scroll the page down nothing happens for a short period of time. I fully acknowledge that some of my perceived delays are due to the lag time of looking back and forth between a timer and the web page, which were right next to each other.
  • Page lag with web page cached. I report both the technical page load time and the perceived page load time. This represents a repeat visitor.
  • Total download time with no cache. This is the time in which all HTML, JavaScript and CSS has finished loading. As mentioned in bullet 1, this represents the number reported by the developer tools. Lazily loaded content can continue to happen unbeknownst to the user and drag down page performance for the first time page visitor.
  • Total download time with cache. Lazily loaded content can also drag down page performance for repeat visitors.
  • And last but not least, Google’s PageSpeed Insights online tool gives a few guidelines for examining how well a web page stacks up against specific criteria. My only sticking point is that it’s not 100% clear what criteria are being used. But I will point out that not a single top-four website got excellent ratings in the ‘speed’ category. In fact, if we were giving out grades, two of them were in the ‘C’ category and the other two were in the ‘F’ category.

Tests (Averaged)

  1. Page lag no cache – 1.18 seconds reported, however based on my perception it looked more like between 2 and 3 seconds as the page finished visually loading.
  2. Page lag cached – 1.53 seconds according to Chrome dev tools. Strangely, the cached page tests seemed just a hair slower than the non-cached page. I’ve noticed that browsers can sometimes be a bit slow when grabbing cached files. It would take more research to dig into how they construct their page and what cached settings are used.
  3. Total download time (no cache) 36.52 seconds, 819KB, 103 requests (Yes, that’s right…around 35 – 36 seconds for a full and complete page load)
  4. Total download time (cached) 4.59 seconds, 428KB, 78 requests
  5. PageSpeed Insights
    1. Speed – 71/100
    2. User Experience – 99/100

Tests (Averaged)

  1. Page lag no cache – 415ms reported, based on my perception it looked more like between 1 and 3 seconds as the page finished visually loading. There was a somewhat brief spinner icon that displayed as the page loaded. Fast!
  2. Page lag cached – 378ms actual, based on my perception it looked more like between 1 and 2 seconds. I was able to start scrolling immediately.
  3. Total download time (no cache) 10.32 seconds, 358KB, 41 requests
  4. Total download time (cached) 9.61 seconds, 47KB, 30 requests
  5. PageSpeed Insights
    1. Speed – 77/100 (This number surprised me, but again we don’t know how the number was calculated)
    2. User Experience – 99/100

Tests (Averaged)

  1. Page lag no cache – 5.52 seconds reported, and that approximately matched what I could see.
  2. Page lag cached – 4.45 seconds actual, and that also matched what I could see.
  3. Total download time (no cache) 8.25 seconds, 572KB, 44 requests
  4. Total download time (cached) 7.04 seconds, 25KB, 39 requests. Wow, 7 seconds for 25KBs??
  5. PageSpeed Insights
    1. Speed – 50/100 (Yikes!)
    2. User Experience – 96/100

Tests (Averaged)

Apple gets the worst grade of the group because when I surfed to their site I got a full-blown desktop website instead of a mobile-enabled website. PageSpeed Insights apparently agreed with me.

  1. Page lag no cache – 2.41 seconds reported, and my eyeballing it said between 2 and 4 seconds.
  2. Page lag cached – 1.69 seconds according to Chrome dev tools. My eyeballing it tended to look like around 2 seconds.
  3. Total download time (no cache) 3.66 seconds, 2.8MBs, 72 requests.
  4. Total download time (cached) 2.46 seconds, 905KB, 71 requests.
  5. PageSpeed Insights
    1. Speed – 58/100 (Yikes!)
    2. User Experience – 60/100 (Double Yikes!)


Since the vast majority of internet users buy products from these major retailers, I believe their overall perceptions of how a web site should perform are in great part established by their experiences buying products online from them. None of the top sites were perfect, and there is always room for continued improvement.

Only one website out of the top four Internet retailers delivered a technical page load speed that was under one second: Amazon came really, really close. Staples had mediocre mobile performance. Apple didn’t offer my Android phone a mobile-enabled website.

There is a difference between the time when the page is loaded in the browser as reported by the developer tools and when all web page components become completely visible and then, a short time later, ranging from several hundred to several thousand milliseconds, fully usable. As a mobile web developer I can tell you it takes a bit of time for a mobile application to be 100% ready. Many (most?) of us have experienced the often herky-jerky surfing experience as a web page bounces up and down while content is still loading in the background. iPads have this nasty habit if you aren’t patient enough to wait and wait until the page “appears” to have finished loading. Because of this, defining technically at what point the page becomes fully usable can be fairly subjective. This is especially true because some retailers treat tablets like desktop machines and deliver a full-blown version of a website. Testing for when the page becomes visible and usable is very dependent on the phone’s capabilities, any other applications that might be using the phone’s hardware and bandwidth resources, the internet connection at that point in time, and the user’s perceptions!

Repeat page visits almost always load faster. Web developers already know this, but it’s important to keep in mind when making sense of page performance discussions: first time visitors will get a different experience than repeat visitors who come back frequently. There are all sorts of magic tricks that can be done to control and tweak page caching.


Mobile Analysis in PageSpeed Insights
Mobile Path to Purchase: Five Key Findings (interesting info on how people use mobile for retail)
Amazon’s sales versus others (WSJ)
Top 5 largest online retailers

Posted in Performance | No Comments »

4 reasons user interface workflows are important

This blog post is for teams that are looking to build new applications, or are rebuilding existing systems from the ground up. User interface workflows are the steps someone needs to take in order to complete a single task. These are no different from any other system you may have for doing things in your daily life. We all have systems. Sometimes we implement our own systems without even thinking about it, because they intuitively make our lives easier.

Some common examples of systems that you might use are making coffee every morning and then drinking it on the way to work. Or, taking the same route to and from work every day. Eating dinner at the same place every Friday night. Maybe you have a system for creating passwords. The list goes on. We often also refer to these as rituals. It’s rare, or perhaps even unheard of, for us to devise a ritual for ourselves that causes frustration or anger. Eventually a ritual, or system, can become a habit and then you don’t even think about doing it anymore…it just happens.

User interfaces deserve the same love and care as any self-imposed ritual or system you’ve ever devised. We’ve all experienced bad user interfaces: things are hard to find, finishing a task isn’t intuitive, or the application breaks. A good user interface follows a natural progression: it simply flows along, with a minimal number of places where you “get stuck.”

The 4 Reasons

Note, I’ve vastly simplified these reasons and I could certainly write a lot more on each topic. So rather than blathering on and making this an academic-grade paper, I think these speak for themselves and can hopefully spark constructive conversation within your team or organization.

Time and money. Intuitive workflows are enjoyable to use and almost second nature. If a workflow is non-intuitive, it can cost your organization time and money as users struggle to complete tasks. Long term technical support costs can be a direct reflection of how easy or hard your application is to use.

Attracting/retaining customers. If you own an online retail site, where seconds matter to web visitors with sub-second attention spans, a well-designed site can actually help attract and retain customers. New customers appreciate when a system is easy to use. Systems that take too long to learn can cause customers to leave and never come back.

Training costs. Training costs can be lower for well-designed systems, especially if the concepts are easier to grasp. More complex systems, of course, have a longer learning curve and involve more in-depth training and training follow-up.

Modification resistance. There is typically some resistance to change, and change is inevitable for most applications as time goes on. Re-training costs are almost always taken into account when systems are modified in any way. If a system has been well received to begin with, then small changes may not be a big deal. However, if a system has a perception of being very complex, then there will be an expectation that any new additions will also be very complex. It may be fair to say that within an organization, resistance to change increases along with the perceived complexity of the application.

What are some examples of simple workflows?

I believe travel web sites have nailed down their workflows pretty well, and we can learn a lot from them. Competition is fierce in the travel industry, and these sites live and die on generating high sales volume. Sites such as Hotwire, Travelocity and others have to get you the information you need as quickly and easily as possible or visitors simply go elsewhere. Because of this pressure, they have to deliver exactly what you are looking for, whether it’s a simple round-trip plane ticket, lodging, a rental car, an entire travel package or more.

How hard is it to build a simple user interface?

In many software circles there are discussions proposing that it’s far easier to quickly start coding an interface and simply get the project going than it is to take the time to design a really good interface beforehand. If you’ve ever worked with a user interface designer, you’ll have had this notion thoroughly debunked and new concepts drilled into your brain: understand and agree on the design first with all interested stakeholders, well before anyone types a single line of code.

It can seem counter-intuitive at first to hear about spending time at the whiteboard drawing storyboards and diagrams rather than simply sitting at your desk slamming down lines of code and pushing out prototype after prototype. I can guarantee one thing: it is far faster, easier and significantly less expensive to change an initial user interface storyboard in something like Balsamiq than it is to re-write user interface code over and over again as different groups try to agree on the best workflow.

Posted in UX