Smartphone devs, yes SD card speed matters!

If you want to get the highest performance out of your SD cards then read on. The purpose of this article is to raise awareness and spark your curiosity about SD card performance considerations.

Micro SD Class 2

Many developers I talk to aren’t aware that the read/write speed of SD memory cards can have a significant effect on performance. This is especially true if you are moving lots of data between a smartphone and the SD card. The good news is there is quite a bit of information out there to help you maximize performance, and a lot of it comes from high-end camera aficionados, believe it or not.

The most common feedback I get is that developers typically buy the cards with the most capacity at the lowest price. Depending on what you are doing, the cheapest and slowest card isn’t always the best choice. With a little bit of research your read/write performance could improve significantly.

To start with, there are four common speed classes: 2, 4, 6 and 10. Each number represents a minimum sustained write speed, and you can find it printed on the front of your card:

  • Class 2 ~ 2 Mbytes/sec
  • Class 4 ~ 4 Mbytes/sec
  • Class 6 ~ 6 Mbytes/sec
  • Class 10 ~ 10 Mbytes/sec

Read/write performance to your phone’s SD card really depends on HOW your application reads and writes data, so you may have to do some testing to find out what works best; a rough benchmark sketch follows the usage-pattern list below. Performance depends on multiple factors, including:

  • Typical file types (e.g. video vs. text vs. image, etc)
  • Average file or data transaction size
  • Percentage of reads to writes
  • Duty cycle (percentage of reads or writes over a fixed time period)
  • Usage pattern

Usage pattern deserves a bit more attention and really starts to tell the story of what your application does behind the scenes. I think the best way to describe it is through some common use cases:

  • Many small reads and writes to/from a local database.
  • Occasional small reads and writes to/from a local database.
  • Occasional large reads from a local database.
  • Occasional large reads and writes to/from a local database.
  • A large read upon application startup and a large write upon application shutdown.
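
If you want to put rough numbers on patterns like these, a quick benchmark helps. Here is a minimal Node.js sketch that compares one large sequential write against many small writes; the mount point is just an example, and on an actual phone you would do the equivalent with your platform SDK’s file APIs. Keep in mind the operating system may buffer writes, so treat the results as a rough comparison only.

// sdcard-benchmark.js - rough sketch: one large write vs. many small writes.
// The mount point below is only an example; adjust it for your setup.
var fs = require("fs");

var cardPath = "/sdcard/benchmark";          // assumption: card mounted here
fs.mkdirSync(cardPath, { recursive: true });

// 1 x 50MB sequential write
var chunk = Buffer.alloc(1024 * 1024);       // 1MB buffer
var start = Date.now();
var fd = fs.openSync(cardPath + "/large.bin", "w");
for (var i = 0; i < 50; i++) {
	fs.writeSync(fd, chunk);
}
fs.closeSync(fd);
console.log("1 x 50MB write:    " + (Date.now() - start) + " ms");

// The same 50MB spread across 800 x 64KB files
var small = Buffer.alloc(64 * 1024);
start = Date.now();
for (var j = 0; j < 800; j++) {
	fs.writeFileSync(cardPath + "/small-" + j + ".bin", small);
}
console.log("800 x 64KB writes: " + (Date.now() - start) + " ms");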

Wikipedia has noted that speed can differ significantly depending on what you are writing to the card: writing large files versus writing many small files has widely different effects on performance. I saw similar behavior when I worked on ultra-high-performance server systems. The concept still holds today and provides excellent hints for squeezing every extra millisecond out of the user experience.

If you need maximum performance then consider reformatting or defragmenting your card on a regular basis. I know Windows disk defragmenter utilities work on most SD cards; I’m not sure about Mac. I have also seen multiple articles argue that bigger capacity is better because of fragmentation: as the data on the card becomes more fragmented, its speed starts to decrease over time. It’s the same concept as when you “defrag” the hard drive on your laptop.

References

If you want to learn more here are some helpful links:

SD Association – Bus speed

SD Association – Speed Class

Wikipedia – Secure Digital (See Speed Class Rating section)

Does your camera need a fast SD card? (good insight into SD card speed)

A closer look at Base64 image performance

This post takes a closer look at Base64 image performance, offers some use cases and raises some questions. I’ve had a number of conversations recently about the benefits of using client-side, Base64 image strings to improve web app performance rather than making multiple <img> requests. So I ran a few tests of my own, and the results are shown at the bottom of the post.

If you aren’t familiar with them, a Base64 image is a picture file, such as a PNG, whose binary content has been encoded as an ASCII string. Once you have that string, you copy and paste it into your JavaScript code. Here’s an example of what that looks like:

<html>
<body onload="showImage()">
	<img id="test"/>
	<script type="text/javascript">
		// Base64 string truncated here for readability
		var html5BadgeBase64 = "iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXH…";
		function showImage(){
			// Build a data URI from the string and use it as the image source
			var image = document.getElementById("test");
			image.src = "data:image/png;base64," + html5BadgeBase64;
		}
	</script>
</body>
</html>

There are other ways of doing this that I’m not covering in this post such as using server-side code to convert images on the fly, passing Base64 strings in URLs, etc.

The most commonly cited advantage is that including a Base64 string in your JavaScript code saves you one round-trip HTTP request per image, and there is no argument on that point. The real questions in my mind are: what are the optimal number and size of Base64 images? And is there a way to quantify how much, if at all, they help with performance?

Size? Base64 image strings will always be larger than their native counterparts. Take the following example using a copy of the relatively small HTML5 logo and a decent-sized screenshot. As you can see, the text equivalents are 33% and 31% larger, respectively.

Html5.png  = 1.27KB (64×64 pixels)
Html5base64.txt = 1.69KB (1738 characters)
% Difference =  +33%

Screenshot.png = 19KB (503×386 pixels 24bit)
ScreenshotBase64.txt = 24.9KB (25584 characters)
% Difference = +31%

In comparison, when working with <img> tags your markup only carries a reference that points to the actual image file, which the browser stores in memory.
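
If you want to check the Base64 overhead with your own images, a few lines of Node.js will do it (the file name is just an example):

// base64-overhead.js - print the size penalty of Base64-encoding an image
var fs = require("fs");

var png = fs.readFileSync("html5.png");     // example file name
var base64 = png.toString("base64");

console.log("Binary size: " + png.length + " bytes");
console.log("Base64 size: " + base64.length + " characters");
console.log("Overhead:    " + Math.round((base64.length / png.length - 1) * 100) + "%");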

Convenience? As you can see, Base64 strings can get quite long. The simple HTML5 logo in the previous example becomes a 1,738-character string, and that’s only 1.69KB’s worth of image.

Can you imagine having a dozen images similar in size to the 19KB screenshot example? That would create over 300,000 ASCII characters. To put that into the perspective of a Word document: using 1” margins all the way around, it would run approximately seven and a quarter pages!

I assert that Base64 is best for static images, ones that don’t change much at all over time, and that limiting it to those is simply the best approach from a productivity perspective. The bigger the image, the more time-consuming it becomes to convert it, copy and paste it into your code, and then test it. Any time you make a change to the image you’ll have to repeat the same steps, and if you accidentally inject a typo into a Base64 string you have to reconvert the image.

In comparison, when using a regular old PNG file, you create the new version, copy it out to the web server, flush your browser cache and run a quick test with no need to change any code, and bang, you’re ready to go have a cup of coffee.

Caching? This depends on your caching headers, browser settings and web server settings. I’ll just say that typically Base64 images will be cached as part of whichever file contains them, either your main HTML file or a separate JavaScript library.
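
As a purely hypothetical example of the web server side of that equation, here is a bare-bones Node.js server that sends a long-lived Cache-Control header for a JavaScript library holding Base64 images; the file name and lifetime are just placeholders.

// cache-header-sketch.js - hypothetical server: cache the Base64 library for a week
var http = require("http");
var fs = require("fs");

http.createServer(function (req, res) {
	if (req.url === "/base64-images.js") {
		res.writeHead(200, {
			"Content-Type": "application/javascript",
			"Cache-Control": "max-age=604800"   // one week, in seconds
		});
		res.end(fs.readFileSync("base64-images.js"));
	} else {
		res.writeHead(404);
		res.end();
	}
}).listen(8080);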

Bandwidth? Using Base64 images will increase the amount of bandwidth used by your website. Compare the size of your HTML file with Base64 images to the size of the same file simply using <img> tags. You can do some basic math if you add up the size of a particular page and multiply it by the number of visits. Better to err on the side of caution, because there really isn’t a good way to tell which images and JavaScript files are being cached in your visitors’ browsers or for how long. Here’s an example: if you have a 30GB bandwidth limit per month, simply converting all of your PNG images to Base64 could very easily push you over the limit:

100,000 page hits/month (main.html) x 256KB = 25.6 GB (incl. 75KB of standard PNG images)

100,000 page hits/month (main.html) x 293.5KB = 29.4 GB (incl. 97.5KB of Base64 images)
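
The arithmetic above is easy to repeat for your own numbers; here is the same calculation as a tiny sketch (the figures match the example above, using 1 GB = 1,000,000 KB).

// bandwidth-estimate.js - repeat the back-of-the-envelope math above
var hitsPerMonth = 100000;

function monthlyGB(pageSizeKB) {
	return (pageSizeKB * hitsPerMonth) / 1000000;   // 1 GB = 1,000,000 KB here
}

console.log(monthlyGB(256) + " GB with standard PNG images");   // 25.6 GB
console.log(monthlyGB(293.5) + " GB with Base64 images");       // 29.35 GB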

Also, some hosting providers give you decent tools that you can use to experiment with Base64 images versus regular images and test the effect on your bandwidth consumption.

Latency? This variable doesn’t really apply directly to Base64 images. There are many factors that determine latency that I’m not going to discuss here. There are some more advanced networking tools that let you figure out average latency on your own web servers. Every request will be unique based on network speed, the number of hops between the client and the web server, how the HTTP request was routed over the internet, TCP traffic over the various hops, load on the web server, etc.

A few quick performance tests.

What would a Base64 blog post be without a few tests? I devised four simple tests: one in which I referenced a JavaScript file containing Base64 images, one which contained five <img> tags, and then I re-ran each of them to measure the cached performance.

These tests were performed in the Chrome browser over a CenturyLink DSL connection with a download speed of 9.23MB/sec and an upload speed of 0.68MB/sec. Several tracert runs from my machine to the web server showed more than 30 hops with no significant delays or reroutes. The web server is a hosted machine.
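
If you want to gather numbers like the ones below for your own pages, one option (not the exact setup used for these tests) is the browser’s timing APIs; a small script like this logs rough page and per-resource load times:

<script type="text/javascript">
	// Log rough page and per-resource load times after the page finishes loading
	window.addEventListener("load", function () {
		var t = performance.timing;
		console.log("HTML load time:  " + (t.responseEnd - t.requestStart) + " ms");
		console.log("Total load time: " + (t.loadEventStart - t.navigationStart) + " ms");

		// One entry per fetched resource: the Base64 .js library or each PNG
		performance.getEntriesByType("resource").forEach(function (r) {
			console.log(r.name + ": " + Math.round(r.duration) + " ms");
		});
	});
</script>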

Test 1 – JavaScript file with Base64 images. This test consists of an uncached basic HTML file that references a 125KB JavaScript library containing five Base64 images.

Time to load 1.9KB HTML file: 455ms

Time to load 125KB JavaScript file: 1.14s

Total load time: 1.64s

Test 2 – Cached JavaScript file with Base64 images. This test consists of reloading Test 1 in the browser.

Time to load cached HTML file: 293ms

Time to load cached JavaScript file: 132ms

Total load time: 460ms

Test 3 – using <img> tags to request PNG images. This test consists of an uncached HTML file that contains five <img> tags pointing to five remotely hosted 20KB PNG files.

Time to load 1KB HTML file: 304ms

Time to load five images: 776 ms

Total load time: 1.08s

Test 4 – Cached HTML file using <img> tags to request PNG images. This test consists of reloading Test 3 in the browser.

Time to load cached HTML file: 281ms

Time to load five cached images: 16ms

Total load time: 297ms

Conclusions

It’s not 100% true that multiple HTTP requests result in slower application performance compared to embedding Base64 images. In fact, I’d seen anecdotal evidence of this before on production apps, so it was fun to do some quick testing even if my tests were not completely conclusive.

My goal was to spark conversation and brainstorming. I know some of you will say that thousands of tests need to be run and statistically analyzed. My argument is that these tests represent actual results that I could see with my own eyes rather than being lumped into some average or median statistic.

Note that I’m just posting a snapshot of the tests I ran. I didn’t have enough time to draw up a significant battery of tests to cover as many contingencies as possible. However, the numbers I’ve posted were fairly consistently in favor of the multiple PNG requests loading faster than a single .js file containing five Base64 images. Obviously more significant testing is needed to sort out other real-world variables, such as image file size versus application size, under a variety of conditions and in different browsers.

Resources

JavaScript library with five Base64 images

HTML file that references the JavaScript library of five Base64 images

HTML file with five <img> tags

[Edited 2/26/13: fixed a few typos]

Tips for Clearing the Browser Cache: IE, Chrome and Firefox

When doing web development, especially JavaScript/HTML, it’s sometimes hard to tell whether your changes loaded when you refreshed the web page; in fact, sometimes your changes aren’t reflected at all. The best thing to do is delete the cache and then reload the page. So, this post will tell you how to do that for the three most used browsers: Firefox, IE and Chrome.

Before I tell you how to do it, it’s good to know what the cache does and why. It’s basically a file directory where your browser stores temporary files such as web pages (e.g. HTML files), images (e.g. PNGs) and other web-related items including sound files (e.g. MP3s). The idea behind storing these files is the user’s experience: it’s faster to retrieve a local file than it is to retrieve it from some remote web site, so the page appears to load faster to the user. Another reason is that it reduces server load for high-usage sites, because many of the files are loaded locally for repeat visitors. But even though you may care about how this works, your end users certainly don’t.
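
As a quick aside for developers: if you only need one file to bypass the cache, you don’t always have to clear everything. A common trick is a cache-busting query string, so the browser sees a brand new URL; here’s a minimal sketch (the file name is just an example):

<script type="text/javascript">
	// Load app.js with a throwaway query parameter so the cached copy is skipped
	var script = document.createElement("script");
	script.src = "app.js?nocache=" + new Date().getTime();
	document.getElementsByTagName("head")[0].appendChild(script);
</script>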

Chrome 16.x (Windows)

Go to the top right of the browser and click on the wrench symbol, then Tools > Clear Browsing data. Chrome will then load the Options page and let you choose by timeframe how far back to go when deleting data. When you are doing frequent web development the “past hour” option is awesome: you can clear just your most recent work, and all your other cookies and data will stay in the cache.

 

Firefox 10.x (Windows)

Go to the top left of the browser and click the pull down menu, then History > Clear Recent History. Firefox then loads a popup window that also lets you choose how far back in time to delete data. Firefox was the first browser to offer the “last hour” option. Again, it’s a really nice thing to have when you are doing frequent builds and constantly reloading the browser.

 

Internet Explorer 9 (Windows, of course)

For IE, go to Tools > Internet Options > Delete. Unlike Firefox and Chrome, IE deletes everything, and I don’t know of a way to tailor the tool not to do that. This is something to be aware of if IE is your primary browser and you need to blow away the cache: all passwords, cookies and anything else you have stored will be deleted. IE does, however, have a nice feature that I use a lot, which is the Delete browsing history on exit option. Again, if you are doing lots of builds (code, then reload the page to see changes), consider checking this option and save yourself a bunch of time clicking through menus.

 

 

State of the Internet Browser 2012 – consumer browser usage will decrease

Over the next two years I see consumer browser usage decreasing as people increasingly spend more time using native mobile applications. This has a number of interesting implications.

The facts. As a web application developer I pay close attention to browser and browser-related technology usage statistics and trends. Like most people, I judge statistics based on my own experience and the experience of my co-workers, family and peers.  Here are some trends which I’ve been keeping an eye on:

  • Smartphones are rapidly replacing non-smart phones around the world.
  • The number of specialized smartphone applications is continuing to expand.*
  • The number of games for smartphones continues to grow rapidly.**
  • The amount of time people spend on their smartphone, whether it’s playing games or using specialized applications, is increasing.

Also based on my personal experience are the following additional observations that further tilt the balance in favor of native applications:

  • Performance. Native smartphone applications, when built correctly, almost always outperform web applications: I’m referring to actions such as page refresh, general drawing capabilities and, to a lesser degree but still a factor, the look-and-feel. This is a general fact of application technology: compiled applications perform faster than interpreted applications. For the most part, once I’ve used a native application, such as the Southwest Airlines check-in app, I loathe having to use the web page. It just seems so clunky and slow in comparison.
  • Games. Ah yes, we can’t forget game performance, as well as look-and-feel. Why would I want a mobile browser-based game? What’s the point of building a high-performance, beautiful game user interface in a browser? See my previous bullet’s comment about compiled application performance. Yes, yes, yes, I know that HTML 5 is making big strides, but we are talking about mobile applications and the technology as it exists today. You can’t tell your customers that they’ll have to wait another year for better game performance, because by then your favorite browser will have such-and-such HTML 5 functionality figured out. Your competitors would jump right in, tweak their native app and leave you in the dust!

A Corollary. If you generally agree with my bullets above, then perhaps you’ll agree that the corollary is this trend:

  • Consumers are spending less time on desktop and laptop machines “browsing the web” and more time using their smart phones.

There are many reasons for this in addition to the ones I already listed. I suspect the top reasons are that it’s so easy to use your smartphone and that it’s right by your side all the time, even when you aren’t home. You have most likely seen people with their heads down playing with their smartphones during business meetings, while eating, while standing in line, while watching TV and even during sports events.

What about the Browser Vendors? These trends have interesting implications for browser vendors. They have to be aware of what’s going on. It’s possible that this is one of the many factors behind their massive push to add HTML 5 capabilities in an attempt to stave off what I’m going to call “user erosion”, as consumers spend less time using web browsers.

But, there are some facts to consider related to building applications that run in the browser:

  • Still functionality problems between different browsers. While the latest generation of browsers is the closest it has ever been to parity in terms of JavaScript and HTML functionality, web developers are still hacking code to make certain things work equally across all browsers. These “hacks” cost extra time and money to code and maintain, and the functionality differences between browsers cause customer frustration when things look different or don’t work as expected. This is especially true in large, retail-type consumer apps where you have little control over which browser your customers choose to use.
  • Faster but fast enough? Today’s browsers have the fastest parsers ever, but it’s a fact that they still aren’t as fast as native code, and they never will be. For the geeks reading this, browsers incur a CPU cost associated with parsing and then executing interpreted code. Smart engineers are going to continue to close the gap, but compiled code will always be faster and more powerful than code running in a browser. Period.
  • Memory usage. Browsers tend to be what we call “leaky”: the longer you use one without restarting it, the more memory it consumes. I believe this is less of a problem in mobile browsers, where windows get closed a lot more frequently than in desktop/laptop browsers. However, it’s still important to consider on mobile phones, where more memory usage equals less battery life. Native apps can definitely leak memory, but they start from a smaller initial footprint, and there are much better tools available for finding native app memory leaks. For browser apps, you also have the browser’s memory usage on top of your application’s memory usage.
  • Security. Security is getting better for web browsers. But it’s still easier to build a highly secure native app today than it is to build a secure web app. Also, for better or for worse, I suggest that many consumers perceive native apps to be more secure than web apps. Do you want to do your mobile banking over a web app or a native app? And whether a perception is right or wrong is sometimes irrelevant, because it still strongly affects people’s behavior.

Concluding Remarks

Consumer-based companies are going to make important strategic choices based on information similar to what I’ve written above. My guess is that the most successful businesses will be the ones that adapt to what their customers want, and if your customers are spending less time “on the web” then you should seriously consider adapting. Just to be clear, I’m definitely not saying that browsers are going away. No one has a crystal ball, and new technology is being created all the time. However, the momentum and sheer size of these trends, with hundreds of millions of people buying and using smartphones worldwide, make them well worth studying for their potential impact on your business.

References:

Mobile Apps Put the Web in Their Rear View Mirror
Mobile Apps vs. the Web – Which is Better For Business?
Gartner Report on Smart Phone Sales in 3rd Quarter 2011

* Companies are building specialized apps that essentially replace the need for customers to visit their web site. However, these apps offer much more control and typically provide a more consistent user experience than the web. Southwest Airlines, for example, offers three types of mobile apps in addition to a mobile web site: http://www.southwest.com/html/air/products/mobile.html.

** Books and games, respectively have consistently been the top two categories for the most popular apps, for example: http://www.gottabemobile.com/2011/07/06/ipad-app-store-breakdown-top-apps-categories-chart/

Major public web sites miss the mark on using advanced web technology

As a developer I notice things about web sites that the average person wouldn’t think twice about. I also know that the level of technology available for building user interfaces is well beyond what we had in the early 2000s, and the ability to build amazing, user-centric interfaces is easier than ever. But I’m always perplexed that most major web sites today incorporate very little of this technology in their full-functionality web pages.

So, I’ve been doing an informal survey for the past month, and my list includes major news sites, airline web sites and retailers. The vast majority of them aren’t much easier to use than they were five years ago, with a great selection of hyperlinks, tabs and full page refreshes. For the fun of it I decided to travel back in time using an internet time machine called the Wayback Machine and compare some of these sites to their predecessors. I challenge you to do the same.

I have a few suspicions as to why this is happening, or not happening as the case may be. First, plug-in based technology such as Adobe Flex and Microsoft Silverlight typically requires some amount of time to load the initial payload into the browser. Sometimes you can create highly optimized or lazy-loaded packages, but it’s challenging. It’s rare to find one of these apps that loads in the sub-second timeframe required in today’s hyper-competitive environment, and the general impression is that the longer your page takes to load, the fewer visitors you will have. So most major websites’ code is made up of HTML, JavaScript, jQuery and CSS, which most browsers have gotten really, really good at parsing extremely fast.

Second, it’s challenging to build Flex and Silverlight websites so that web crawlers can read text-based content. This seems fairly academic. If you can’t effectively index the content of your site, then potential visitors can’t search it via external search engines such as Bing and Google. Period.

These two items alone may explain why visually spectacular interfaces are limited to small portions of most public websites, such as video plug-ins or specific sections of a much larger site. Where these more advanced interfaces typically reside is in back-office applications, where functionality trumps the need for millisecond application load times. There are some very cool exceptions in consumer apps, such as the end-user experience of Mini Cooper’s build-your-own-car website. Yet, unfortunately for us as consumers, these are few and far between, as consumer companies cater to the vast hunger for ever faster page load times.

The good news for advanced web technology in consumer apps is that I’m seeing a large opening with mobile deployments. The plug-in technologies now have the capability to let you deliver visually enticing experiences across a wide array of devices. And this can be done, for the most part, without the tedium of worrying about all the vast nuances of different browser types and versions. Plus there is a bonus: the application is already loaded and ready to go on your device, leaving only the on-device load time when you turn the app on. I’m seeing some really innovative uses of the technology in what I call focused solutions, or applications built for a very specific purpose. Unfortunately most are in commercial beta and I can’t link to them. But you’ll see them soon in an online marketplace right at your fingertips.

References:

Mobile Development with Adobe Flex 4.5

Silverlight for Windows Phone

Flex.org Showcase

Study: Consumers abandon slow loading websites (April 2010)

Let’s make the web faster (Google, May 2010)

Improving Browser performance and stability – will web workers help?

The single-threaded nature of JavaScript is an old tradition that needs to go away. It was great in the wild-west internet days of the 20th century. But today we have more complex needs, driven by the advancements happening around good old JavaScript as we know it, such as the ongoing work on HTML 5.

The reason I bring this up is that I’ve been watching the discussion on Web Workers as it has evolved. It’s a brave attempt to standardize some sanity onto this ancient notion of single threading. Now, I do want to say that this post isn’t about debating the merits of web workers per se; it’s about giving developers better tools on which to build web applications for end users. I’ll be the first to agree that many developers (but not all!), for a variety of reasons, build apps like factories, but without many quality checks.
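
For anyone who hasn’t looked at them yet, here’s roughly what the division of labor looks like: the page hands heavy work to a worker script and gets the answer back through messages, so the UI thread stays responsive. The file names are just examples.

// main.js - runs on the page's UI thread
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
	console.log("Worker finished, result: " + e.data);
};
worker.postMessage(50000000);        // ask the worker to sum this many numbers

// worker.js - runs on its own background thread
self.onmessage = function (e) {
	var total = 0;
	for (var i = 0; i < e.data; i++) {
		total += i;
	}
	self.postMessage(total);         // send the result back to the page
};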

One argument the pro-single-threading camp makes is that doing away with single threading will make things even more complicated for the companies that develop browsers and for the developers that build apps on them, and that, in effect, you’d be giving web app developers free license to create even more terribly built web pages that crash browsers. For brevity’s sake, I’m only picking this one out of many possible arguments, as the one that comes up most often in discussions.

I also don’t ever recall seeing a browser vendor say something like this publicly, but it’s possible. It’s a very weak argument that won’t stand the test of time. Sure, as we build more complex apps there will be more of both good and bad apps; that’s just the way things work. There’s no way we would ever have a single authority that reviews all web apps before they are published, similar perhaps to what Apple does with iPhone apps. Not only would it be impractical, but it certainly seems like it goes against the spirit of the internet and the WWW.

I fall into the camp of evolving the tools to better fit the ever-changing and growing needs of end users. End users don’t understand the limitations of browser technology; they don’t need to and shouldn’t be expected to. All they know is that they want to see ever more visually stunning applications that run well and don’t crash all the time.

Developer tools and technology are much, much more advanced now than when the venerable Mosaic web browser hit the scene back in 1993. As an example, all eyes are on HTML 5 (more on that at a later date), and certainly we have the well-known browser plug-ins, Flash and Silverlight, each with its own development kit. These technologies enable the building of some of the most eye-catching websites, and they really opened people’s eyes to what the web experience could be like.

Now, I am eyes-wide-open about this. There are some well-documented but not well-understood limitations related to the web surfing/development experience, as I blogged about here. But merely saying things should not change because it will become too complicated isn’t a good enough reason to, well…not change. There are lots of smart people out there that love solving these types of problems.

So, I have a few suggestions of my own for the browser vendors and others to debate and work on. I think web workers are a huge step in the right direction, but there are also some more strategic things browser vendors could be doing that would help. To me these are just as important as evolving the web standards, perhaps even more so. This is about browser vendors officially providing guidelines on how to do our jobs better:

  • Best Practices Document. All the major vendors should publish web development best practices for HTML and JavaScript development. And I’m not talking about the W3C standard; that is what’s expected, but not necessarily what’s actually implemented. For example, I did a quick search for “web development best practices” using Google and Bing, and the very first result I found was a short, not-really-so-helpful article on the Apple web site that was written in 2008!
  • Online HTML/JavaScript Validation engine(s). Each browser vendor should publish its own online HTML/JavaScript validation engine. Better yet, someone could build one site that checks all major browsers in one shot and provides actionable feedback. I’m aware of other types of validators, such as this one by W3C for HTML and the like. But in general, right now it’s just a hodgepodge of third-party tools and guesswork as to whether a web app is working right. And if you are like me and run the web debugger all the time, you know how many broken web pages there really are.

References: