A closer look at Base64 image performance

This post takes a closer look at Base64 image performance, offers some use cases and raises some questions. I’ve had a number of conversations recently about the benefits of using client-side Base64 image strings to improve web app performance, as opposed to making multiple <img> requests. So I ran a few tests of my own; the results are shown at the bottom of the post.

If you aren’t familiar with them, a Base64 image is a picture file, such as a PNG, whose binary content has been translated into an ASCII string. Once you have that string, you copy-and-paste it into your JavaScript code. Here’s an example of what that looks like:

<html>
<body onload="init()">
	<img id="test"/>
	<script type="text/javascript">
		// String truncated for readability; the full version is much longer.
		var html5BadgeBase64 = "iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXH…";
		function init(){
			var image = document.getElementById("test");
			image.src = "data:image/png;base64," + html5BadgeBase64;
		}
	</script>
</body>
</html>
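The conversion step itself can be done with any Base64 encoder. Here’s a minimal sketch, Node.js assumed; the eight-byte PNG signature stands in for real file contents, and in practice you would encode the whole file (for example with fs.readFileSync):

```javascript
// Minimal sketch of the conversion step (Node.js assumed).
// Any binary data encoded as Base64 becomes an ASCII string; for a real
// file you would use fs.readFileSync("html5.png").toString("base64").
var bytes = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]); // PNG file signature
var base64 = bytes.toString("base64");
console.log(base64); // "iVBORw0KGgo=" -- note the first characters match the PNG string above
```

Because every PNG starts with the same signature bytes, every Base64-encoded PNG starts with "iVBOR", which is a handy sanity check when pasting strings around.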

There are other ways of doing this that I’m not covering in this post such as using server-side code to convert images on the fly, passing Base64 strings in URLs, etc.

The most commonly cited advantage is that embedding a Base64 string in your JavaScript code saves you one round-trip HTTP request per image, and there is no argument on that point. The real questions in my mind are: what are the optimal number and size of Base64 images? And is there a way to quantify how much, if at all, they help with performance?

Size? Base64 image strings will always be larger than their native counterparts. Take the following examples using a copy of the relatively small HTML5 logo and a decent-sized screenshot. As you can see, the text equivalents are 33% and 31% larger, respectively.

Html5.png  = 1.27KB (64×64 pixels)
Html5base64.txt = 1.69KB (1738 characters)
% Difference =  +33%

Screenshot.png = 19KB (503×386 pixels 24bit)
ScreenshotBase64.txt = 24.9KB (25584 characters)
% Difference = +31%

In comparison, when working with <img> tags the markup only holds a reference to the image; the browser fetches and decodes the actual image file and keeps it in memory.

Convenience? As you can see, Base64 strings can get quite long. The simple HTML5 logo in the previous example becomes a 1738-character string, and that’s only 1.69KB worth of image.

Can you imagine having a dozen images similar in size to the 19KB screenshot example? That would create over 300,000 ASCII characters. To put that in perspective, pasted into a Word document with 1” margins all the way around, it would run approximately seven and a quarter pages!

I assert that Base64 is best for static images, ones that rarely change over time. The bigger the image, the more time-consuming it becomes to convert it, copy-and-paste it into your code and then test it. Any time you change the image you have to repeat the same steps, and if you accidentally inject a typo into a Base64 string you have to reconvert the image. From a productivity perspective, sticking to static images is simply the best approach.

In comparison, when using a regular old PNG file you create the new version, copy it out to the web server, flush your browser cache and run a quick test, with no need to change any code. Bang, you’re ready to go have a cup of coffee.

Caching? It depends on your cache headers, browser settings and web server settings. I’ll just say that typically Base64 images will be cached as part of either your main HTML file or a separate JavaScript library.

Bandwidth? Using Base64 images will increase the amount of bandwidth used by your website. Compare the size of your HTML file with Base64 images to the size of the same file simply using <img> tags. You can do some basic math if you add up the size of a particular page and multiply it by the number of visits. Better to err on the side of caution, because there really isn’t a good way to tell which images and JavaScript files are being cached in your visitors’ browsers, or for how long. Here’s an example where you have a 30GB bandwidth limit per month, and simply converting all of your PNG images to Base64 could very easily push you over the limit:

100,000 page hits/ month (main.html) x 256KB = 25.6 GB (incls. 75KB of standard PNG images)

100,000 page hits/month (main.html) x 293.5KB =  29.4 GB (incls. 97.5KB Base64 images)

Also, some providers give you decent tools that you can use to experiment with Base64 images versus regular images and test that against your bandwidth consumption.

Latency? This variable doesn’t really apply directly to Base64 images. There are many factors that determine latency that I’m not going to discuss here. There are some more advanced networking tools that let you figure out average latency on your own web servers. Every request will be unique based on network speed, the number of hops between the client and the web server, how the HTTP request was routed over the internet, TCP traffic over the various hops, load on the web server, etc.

A few quick performance tests.

What would a Base64 blog post be without a few tests? I devised four simple tests: one referencing a JavaScript file containing Base64 images, one containing five <img> tags, and then a re-run of each to measure cached performance.

These tests were performed in a Chrome browser over a CenturyLink DSL connection with a download speed of 9.23Mbps and an upload speed of 0.68Mbps. Several tracert runs from my machine to the web server showed more than 30 hops with no significant delays or reroutes. The web server is a hosted machine.

Test 1 – JavaScript file with Base64 images. This test consists of an uncached basic HTML file that references a 125KB JavaScript library containing five Base64 images.

Time to load 1.9KB HTML file: 455ms

Time to load 125KB JavaScript file: 1.14s

Total load time: 1.64s

Test 2 – Cached JavaScript file with Base64 images. This test consists of reloading Test 1 in the browser.

Time to load cached HTML file: 293ms

Time to load cached JavaScript file: 132ms

Total load time: 460ms

Test 3 – using <img> tags to request PNG images. This test consists of an uncached HTML file that contains five <img> tags pointing to five remotely hosted 20KB PNG files.

Time to load 1KB HTML file: 304ms:

Time to load five images: 776 ms

Total load time: 1.08s

Test 4 – Cached HTML file using <img> tags to request PNG images. This test consists of reloading Test 3 in the browser.

Time to load cached HTML file: 281ms

Time to load five cached images: 16ms

Total load time: 297ms

Conclusions

It’s not 100% true that multiple HTTP requests result in slower application performance compared to embedding Base64 images. In fact, I’ve seen anecdotal evidence of this before on production apps, so it was fun to do some quick testing even if my tests were not completely conclusive beyond a doubt.

My goal was to spark conversation and brainstorm ideas. I know some of you will say that thousands of tests need to be run and statistically analyzed. My argument is that these tests represent actual results that I could see with my own eyes, rather than being lumped into some average or median statistic.

Note that I’m just posting a snapshot of the tests I ran. I didn’t have enough time to draw up a significant battery of tests covering as many contingencies as possible. However, the test numbers I’ve posted were fairly consistently in favor of the multiple PNG requests loading faster than a single .js file containing five Base64 images. Obviously more significant testing is needed to sort out other real-world variables, such as image file sizes versus application size, under a variety of conditions and in different browsers.

Resources

JavaScript library with five Base64 images

HTML file that reference JavaScript library of five Base64 images

HTML file with five <img> tags

[Edited 2/26/13: fixed a few typos]

Going mobile Part 3 – This blog is now mobile enabled!

The Page Not Found Blog is now mobile enabled! Please send me your feedback. After months of research and several months of fiddling around, the mobile template is now live. You should only see the template if you are using a mobile device.

As I’ve mentioned before, I’ve taken the approach of using two different templates for a single website. One is specifically for desktop browsers and one is specifically for mobile. It’s all an experiment anyway, so whatever happens is a learning experience.

The current mobile template settings are fairly bare bones, but I did that on purpose to speed up load times. I’ll tweak the look and feel as time goes on.

There are also still a few rough edges related to my legacy content. For example, some images are displayed in odd places when viewed on mobile browsers. But so far the WordPress tags seem to be displaying just fine, which was a major concern for things such as code snippets. I’ve tested the site using an Android v4.x device and an iPad 3.

References

Going Mobile with Blogging

Going Mobile Part 2 – Dealing with legacy content

Dispelling Responsive App Testing Myths

There’s been a lot of buzz about Firefox’s Responsive Design View and to a lesser extent Chrome’s User Agent swapper. These built-in, ready-to-go tools are awesome and huge time savers for web developers. But, they are NOT a substitute for testing your app across multiple browsers and on different devices.

A highly simplified definition of responsive design is it lets your web app dynamically adjust content based on screen height, width and pixel density. There are over a dozen responsive frameworks including Twitter Bootstrap, 960 Grid, jQuery and Titan. The number of them has exploded in recent years and it’s a daunting task to figure out which ones meet your project’s needs. So, the intent of this blog post is to provide some super simple, quick guidelines for getting started with testing your apps against them.

What does Firefox’s Responsive Design View do? It lets you change the dimensions of your application on-the-fly and even rotate it by simply clicking a button. Very cool! It lets you preview what your app will look like on different screen sizes. In the latest versions of Firefox, you can access this view via Tools > Web Developer > Responsive Design View. See the screenshot for an example of what the tool looks like.

What does Responsive Design View not do? It does not emulate the actual functionality of browsers on different sized devices. So, simply swapping screen sizes on your Windows desktop Firefox browser is not a valid substitute for physically testing your app on a tablet or smartphone. Firefox offers a separate download for Firefox on Android if you want to test on a mobile device.

What does a user-agent do? The user-agent string identifies a browser to a web server or web application. It is sent by the browser in the HTTP request headers. The string can be parsed and used to determine whether any specific actions need to be taken within an application. So, if you are using Chrome you can spoof Internet Explorer 9. Here’s an example of a user-agent for Firefox running on 64-bit Windows 7:

Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0

The most common use of user-agents on webservers is for creating site visit statistics that provide a generalized break-down of site visitors by browser type.

The most common use of user-agents in web applications is determining how content is displayed and how JavaScript may be used in that browser. If you have some JavaScript that you know won’t work in IE, you want to do your best to detect the specific versions of IE so you can control your content.

Why spoof the user-agent? Simply put, it tricks an application into being displayed based on the user-agent characteristics. The key word is “display”. The user-agent spoof does not emulate actual browser functionality. It only shows you what the app looks like if it contains browser-specific code by tricking it into thinking it’s running on a specific browser. However, to properly test full browser functionality you need an actual install of the browser on a specific device and operating system.

Is user-agent detection reliable? In general, no. User-agent detection within your code is not reliable. Because user-agents can be easily spoofed these days, it is no longer a single reliable source for determining how your application will display content. Even jQuery, for example, recommends against relying on user-agents as a sole means of determining browser type.

For JavaScript, it’s usually best to check what is supported via your JavaScript library’s feature-detection helpers, for example in jQuery:

$("p").html("This frame uses the W3C box model: <span>" +
jQuery.support.boxModel + "</span>");

Or, if you are building larger applications consider a special detection library like Modernizr that helps with HTML, CSS and JavaScript coding patterns.

What about CSS3 Media Queries? CSS3 Media Queries are considered an integral part of responsive design coding patterns. They let you use CSS to detect certain characteristics of a browser in order to adjust the styling of an application, for example by swapping the visibility of HTML elements or even adding or removing DIVs from the layout. Firefox’s Responsive Design tool or Chrome’s user-agent swapper can be used to test these elements, with a healthy dose of caution: you still need to do your final testing for browser-specific functionality on the respective browsers themselves. Here is an example of a media query looking for maximum and minimum screen width:

@media screen and (min-device-width:768px) and (max-device-width:1024px) {
    /* styles go here */
}

You can also run media queries from within JavaScript:

var portraitMatch = window.matchMedia("(orientation: portrait)");
portraitMatch.addListener(orientationChangeHandler);
function orientationChangeHandler() {
  if (portraitMatch.matches) {
    // Portrait orientation
  } else {
    // Landscape orientation
  }
}

Using media queries is considered a best practice when combined with a feature detection library such as Modernizr, or with feature detection coding patterns that validate whether a CSS feature works in a particular browser. Feature detection code may be the way to go if you are only detecting a handful of CSS3 attributes and don’t need the extra weight of a full-blown library like Modernizr, for example:

if (document.createElement("detect").style.borderRadius === "") {
    //TODO if borderRadius detected
}

if (document.createElement("detect").style.WebkitBorderRadius === "") {
    //TODO if webkitBorderRadius detected
}