Archive for the ‘Browsers’ Category

Offline JavaScript Part 3 – Intermittent Offline

This is Part 3 of my offline JavaScript series, and it covers intermittently offline web apps. The vast majority of web apps are built on the false assumption that the internet will always be available. Yes, the internet is available most of the time, and most of us rarely encounter issues. But when the internet fails, not if but when, most web apps simply crash and burn in fairly spectacular fashion. I suggest a different approach: there are many, many common use cases, in both consumer and professional apps, that can benefit from offline capabilities.

As discussed in Part 1, intermittently offline web apps are designed to gracefully handle the occasional, temporary internet connection hiccup. The goals of an intermittent offline app are to keep the offline capabilities lightweight and invisible to the user, and to allow the user to seamlessly pass through a temporary loss of data connectivity.

The good news, as discussed in Part 2, is that you can use a variety of libraries and APIs to solve many of the challenges related to partial offline, including detecting whether or not you have an internet connection and handling HTTP requests while offline.

How do I decide if I need intermittent offline capabilities?

If you answer ‘yes’ to the following question then you need to consider adding offline capabilities:

Does the app have any critical functionality that could fail if the internet temporarily goes down?

Critical functionality means functionality that's important to your core business. To be realistic, I'm not talking about building fully armored applications that take every possible contingency into account; that's just not feasible for the vast majority of non-military-grade applications. Some of the most common use cases are filling in forms and requesting data. Temporary interruptions can vary anywhere from a few seconds to a few minutes or longer, and they can happen once or multiple times.

If your application can't handle this and it needs to, then making changes to allow it to work offline can make a big difference to the user. It's almost as if web development should have its own version of "Do no harm," or something like "we can do our best to make users' lives easier." You might be surprised that some very simple and common use cases, such as filling in form data or reading an online article, can benefit from being offline-enabled.

Filling in form data. This has probably happened to everyone who uses the internet, and it applies to both retail/consumer and commercial applications. You spend a while filling out a detailed web form, only to have the submit fail and destroy all your hard work because of a temporary interruption in the internet connection, or because something simply went wrong between the app and the web server.

If our form was offline-enabled, we could store the form data in LocalStorage before attempting to send it to the server. We could also temporarily prevent the web form from submitting and notify the user that there is no internet connection.
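Here's a minimal sketch of that idea; the saveDraft/restoreDraft helpers and the storage key are hypothetical names:

    // Save a draft of the form before attempting to submit it.
    function saveDraft(form) {
        var draft = {};
        for (var i = 0; i < form.elements.length; i++) {
            var field = form.elements[i];
            if (field.name) {
                draft[field.name] = field.value;
            }
        }
        // LocalStorage only stores Strings, so serialize the Object.
        localStorage.setItem("formDraft", JSON.stringify(draft));
    }

    // Restore the draft, e.g. after a failed submit or a page reload.
    function restoreDraft(form) {
        var draft = JSON.parse(localStorage.getItem("formDraft") || "{}");
        for (var name in draft) {
            if (form.elements[name]) {
                form.elements[name].value = draft[name];
            }
        }
    }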

Reading an online article. In this scenario you are reading an article while waiting for a train. Once you get on the train, you know the internet will be marginal. You accidentally click on a navigation link while scrolling down, and the new page fails to load. This effectively ruins your browsing experience: the new page failed to load, and you can't go back to the previous page because it wasn't cached.

There are a number of different ways to protect this type of application. The easiest way is to block any page load requests until the internet is restored. You can also take advantage of the built-in browser cache to store HTML, CSS, images and JavaScript.

Can you show me an example workflow?

The most basic workflow takes into consideration the following questions. How these questions get answered depends on your requirements.

  • Do you allow users to restart apps while offline?
  • Do you simply block all HTTP requests and lock down the app?
  • Do you queue HTTP requests and their data?
  • Do you pre-cache certain data?
  • How will you detect if the app is online or offline?

Here is an example coding pattern for the most basic intermittent offline workflow:
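(A sketch of one such pattern, assuming Offline.js for detection; the queue structure and the sendRequest helper are my own names for illustration.)

    var requestQueue = [];

    // Either send the request right away, or queue it until we're back online.
    function sendOrQueue(url, payload) {
        if (Offline.state === "up") {
            sendRequest(url, payload);
        } else {
            requestQueue.push({ url: url, payload: payload });
        }
    }

    function sendRequest(url, payload) {
        var req = new XMLHttpRequest();
        req.open("POST", url);
        req.send(payload);
    }

    // When Offline.js reports the connection is back, replay the queue.
    Offline.on("up", function() {
        while (requestQueue.length > 0) {
            var item = requestQueue.shift();
            sendRequest(item.url, item.payload);
        }
    });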

What about Offline/Online detection?

If you have no control over which browsers your customers choose, then my recommendation is to use a pre-built library such as Offline.js to check if the internet connection is up or down. It's not perfect, but it's the best choice out there as of the writing of this post.

Don't rely only on the window.navigator.onLine property. It has too many cross-browser inconsistencies, and it is only marginally reliable if the general public is using your app.
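For reference, here is what the built-in approach looks like. These are standard browser APIs, but a browser may report true as long as it has any network connection, even when the internet itself is unreachable:

    // A quick first check, but don't trust it alone.
    console.log("Browser thinks it is " + (navigator.onLine ? "online" : "offline"));

    // The browser fires events when its own notion of connectivity changes.
    window.addEventListener("online", function() {
        console.log("Browser fired 'online'");
    });
    window.addEventListener("offline", function() {
        console.log("Browser fired 'offline'");
    });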

What about caching?

There are several built-in browser caching mechanisms that can help your app get past the occasional internet hiccup. When your app goes offline, you'll have to rely on local, in-browser resources to keep things going:

  • Browser Caching
  • LocalStorage
  • IndexedDB

As mentioned above, browser caching can be a very efficient way to store HTML, JavaScript, images and CSS. Depending on how you set up your web server, this caching takes place automatically in the user's browser and can represent a huge performance gain by eliminating HTTP round trips. I'm not going to talk much about this because there are a ton of great online resources already out there.

Using LocalStorage involves writing JavaScript code if you want to temporarily store HTTP requests. It's limited to String-based data, so if you are using Objects or binary data you'll have to serialize the data when you write it to LocalStorage and deserialize it when you read it out. LocalStorage also almost always has a limit on how much storage is available; 5MB is the commonly accepted limit.
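The serialize/deserialize round trip for plain Objects is a one-liner each way with JSON. A quick sketch; the storage key and data are made up:

    // Serialize an Object to a String on the way in...
    var pending = { url: "/api/items", method: "POST", body: { name: "test" } };
    localStorage.setItem("pendingRequest", JSON.stringify(pending));

    // ...and deserialize it on the way out.
    var restored = JSON.parse(localStorage.getItem("pendingRequest"));
    console.log(restored.url); // "/api/items"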

IndexedDB, on the other hand, stores a wide variety of data types and can store significantly more than 5MB. While in theory the amount of storage space available to IndexedDB is unlimited, practical application of it on a mobile device limits you to around 50MB – 100MB. Your mileage may vary depending on available device memory, the current memory footprint of the browser and the phone’s operating system.

IndexedDB can work natively with the types String, Object, Array, Blob, ArrayBuffer, Uint8Array and File. This offers huge pre- and post-processing savings if you are able to pass data directly into IndexedDB.

There are also a number of abstraction libraries that wrap LocalStorage and IndexedDB, such as Mozilla's localForage. These types of libraries are great if you need to store 5MB of data or less. If your app is running in a browser that doesn't support IndexedDB or WebSQL (e.g. IE9 and below), and you need more than 5MB of space, then you'll have problems. One potential advantage of these libraries is that some of them provide their own internal algorithms for serializing and deserializing data. If working directly with serialization isn't your thing, then a library like this can be a huge benefit.

Can you show me some code?

Yes! Here is a very simple example of how to implement basic offline detection into your apps. It’s easiest to try it in Firefox since you can quickly toggle it online/offline using the File > Work Offline option.

The code is available at: http://jsfiddle.net/agup/1yxj5mzp/. You'll notice two things when you go offline. First, jsfiddle itself will detect that you are offline, in addition to the web app code. Second, when you click the Get Data button while offline, the code sample should detect that you are offline and fire off a JavaScript alert.

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Simple Offline Demo</title>
</head>
<body>
<div id="status">Status is:</div>
<button onclick="getData()">Get Data</button>
<!-- This is our Offline detection library -->
<script src="http://github.hubspot.com/offline/offline.min.js"></script>

<script>

    // Set our options for the Offline detection library
    Offline.options = {
        checkOnLoad: true,
        checks: {
            image: {
                url: function() {
                    return 'http://esri.github.io/offline-editor-js/tiny-image.png?_='
                        + (Math.floor(Math.random() * 1000000000));
                }
            },
            active: 'image'
        }
    };

    Offline.on('up', internetUp);
    Offline.on('down',internetDown);

    var statusDiv = document.getElementById("status");
    statusDiv.innerHTML = "Status is: " + Offline.state;

    function getData() {

        // See if the internet is up or down. Note: Offline.check() runs
        // asynchronously, so Offline.state may still hold the result of
        // the previous check when the switch statement below reads it.
        Offline.check();

        switch (Offline.state) {
            case "up":
                // If the internet is up go ahead and retrieve data.
                getFeed(function(success,response){
                    if(success){
                        alert(response);
                    }
                })
                break;
            case "down":
                alert("DOWN");
                break;
        }
    }

    function getFeed(callback) {
        var req = new XMLHttpRequest();
        req.open("GET",
                "http://tmservices1.esri.com/arcgis/rest/services/LiveFeeds/Earthquakes/MapServer?f=pjson");
        req.onload = function() {
            if (req.status === 200 && req.responseText !== "") {
                callback(true,req.responseText);
            } else {
                console.log("Attempt to retrieve feed failed.");
                callback(false,null);
            }
        };

        req.send(null);
    }

    function internetUp(){
        console.log("Internet is up.");
        statusDiv.innerHTML = "Status is: up";
    }

    function internetDown(){
        console.log("Internet is down.");
        statusDiv.innerHTML = "Status is: down";
    }
</script>
</body>
</html>

Are there any examples of real-life offline apps or libraries?

The github repository offline-editor-js is a full-fledged set of libraries for taking maps and mapping data offline, and it's being used in commercial mapping applications around the world. It includes a variety of sample applications that demonstrate how applications can work in either intermittent or fully offline mode.

Wrap-up

Hopefully you have seen that common use cases can significantly benefit from having basic offline capabilities. Modern browsers have advanced to the point where it’s fairly easy to build web apps that can survive intermittent interruptions in the internet. Taking advantage of these capabilities can offer a huge benefit to your end users.

Resources

Optimizing content efficiency – HTTP caching
Offline-editor-js – Offline mapping library

Posted in Browsers, JavaScript

Offline JavaScript Part 2

In Part 1 we looked at the differences between partial and fully offline use cases. Part 2 provides an overview of the HTML5 interfaces and JavaScript APIs that make it possible to take web applications offline. Going offline involves working with multiple pieces and coding for specific patterns. I've tried my best to stick to technology that is widely available across the largest variety of browsers.

Offline dependencies

Offline web applications are dependent on three things. It doesn't matter if your application is partially or fully offline; you'll still need to address all of them in your code:

  • Caching HTML, CSS and JavaScript
  • Data Storage
  • Offline/Online detection

Caching

Application Cache. The Application Cache, or AppCache, interface lets you specify and store HTML and CSS files as well as JavaScript libraries so that they are available from the browser’s native cache. Once an item is in the cache the browser will use it regardless of whether it’s online or offline. It’s almost like you never went offline!

The AppCache is an essential part of your application strategy for allowing offline browser reloads or restarts. Without this an application will simply fail to re-load while offline.
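At its simplest, AppCache is driven by a plain-text manifest file that your page references from its html tag, and that must be served with the text/cache-manifest MIME type. Here's a minimal sketch; the file names are placeholders:

    <!-- index.html references the manifest -->
    <html manifest="example.appcache">

    CACHE MANIFEST
    # v1 - change this comment to force browsers to re-download the cache
    index.html
    style.css
    app.js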

Data Storage

Browsers have a variety of built-in JavaScript APIs for storing data. The data can be used for maintaining the application's state, such as storing bookmarks and form data. Or, it can be used for storing information such as maps, address and phone lists, TO-DOs or points of interest for a vacation.

LocalStorage. The LocalStorage API is super-easy to use. It stores Strings in simple key/value pairs. It's limited to about 5MB on most browsers. The two main challenges you'll run into with LocalStorage are hitting the storage limit and the performance hit of serializing and deserializing data.

IndexedDB. IndexedDB is essentially an asynchronous noSQL database that lets you store a wide variety of datatypes so that you don't have to deal with serialization/deserialization. Datatypes include String, Object, Array, Blob, ArrayBuffer, Uint8Array and File. While many online sources will tell you that there isn't a size limit, in general you should limit your storage on a mobile device to around 50 – 100MB to help prevent the browser from crashing.
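Here's a bare-bones sketch of the IndexedDB workflow: open a database, create an object store during the upgrade event, then store an Object directly with no serialization step. The database and store names are placeholders:

    var request = indexedDB.open("demoDB", 1);

    // Fires only when the database is first created or the version changes.
    request.onupgradeneeded = function(event) {
        var db = event.target.result;
        db.createObjectStore("items", { keyPath: "id" });
    };

    request.onsuccess = function(event) {
        var db = event.target.result;
        var tx = db.transaction("items", "readwrite");
        // Objects go in as-is; no JSON.stringify() required.
        tx.objectStore("items").put({ id: 1, name: "test", created: new Date() });
        tx.oncomplete = function() {
            console.log("Object stored.");
        };
    };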

WebSql. It's widely recommended that you not build applications directly on WebSql. The World Wide Web Consortium (W3C) is letting this standard die off in favor of IndexedDB and LocalStorage. I'm really only including it here because Safari 7 and the Android native browsers before 4.4 only support WebSql. For more information on how to get around this, read down to the section on IndexedDBShim.

3rd Party Browser Storage

If the built-in browser storage capabilities aren’t meeting your needs you still have other options.

IndexedDBShim. IndexedDBShim is a polyfill for WebSQL-based browsers. Because IndexedDB isn't natively supported on older versions of Safari (such as Safari 7) and Opera, you can use this 3rd party shim to transparently translate your IndexedDB code to work across Android and iOS.

PouchDB. PouchDB is an Open Source experimental library that is an attempt to smooth some of IndexedDB’s rough edges as well as provide additional functionality, such as the ability to sync with remote stores.

LocalForage (Mozilla). LocalForage is also an attempt to bridge the gap between LocalStorage and IndexedDB. It gives you an interface that provides much wider browser coverage than IndexedDB by itself. One of the downsides is the amount of storage you can use: if a user is on an older browser such as IE8 that's limited to LocalStorage, then that user will be limited to storing about 5MB of data. If your requirements call for more than that, such as downloading large address lists, then the app won't work on that browser, or you'll have to build in some sort of paging mechanism that deletes the old data and brings in the new.
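One of localForage's selling points is that the API stays the same no matter which storage backend the browser ends up using. A quick sketch using its callback style (it also supports Promises); the key and data are made up:

    // localForage picks IndexedDB, WebSQL or LocalStorage behind the scenes.
    localforage.setItem("contact", { name: "Ada", phone: "555-0100" }, function(err) {
        if (err) {
            console.log("Save failed: " + err);
            return;
        }
        localforage.getItem("contact", function(err, item) {
            console.log(item.name); // "Ada"
        });
    });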

Offline/Online Detection

There are a number of ways to detect if the browser is online or offline as well as when the internet status changes.

NavigatorOnLine.onLine. Some browsers have a built-in detection mechanism via the navigator.onLine property. However, it is not always reliable, and false positives are a distinct possibility. For that reason, you will have to build additional detection capabilities or lean towards a 3rd party library.

Offline.js. Offline.js is a small Open Source library (~3KB) that detects when you lose an internet connection and when it comes back up. While not perfect, it does handle a lot of cross-browser compatibility issues for you. And, if you find bugs you can always create a fix and submit pull requests.

References

Caniuse – IndexedDB

Caniuse – LocalStorage

Caniuse – WebSQL

Let’s Take This Offline

Posted in Browsers, JavaScript, Mobile

5 ways that passwords get compromised

Over the last month, as I've been doing normal updating of passwords, I came across several major public company websites that gave me the following error:

The password you entered is invalid. Please enter between 8 and 20 characters or numbers only.

Anyone who knows anything about "Passwords 101" knows that restricting passwords to 8–20 characters or numbers only could be problematic if someone hacked into the password database. Security experts say that excellent password security includes alphabetical, numeric, as well as non-alphanumeric characters. A password of only 8 characters and/or numbers can be cracked almost instantly in an offline attack.
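Some back-of-the-envelope numbers show why. Assume an offline attack against a fast, unsalted hash at roughly 100 billion guesses per second, which is within reach of a modern GPU rig. An 8-character password drawn from 36 lowercase letters and digits has 36^8, or about 2.8 trillion, combinations: that falls in under a minute. A 12-character password drawn from all 94 printable characters has 94^12, or roughly 5 x 10^23, combinations: on the order of a hundred thousand years at the same guessing rate.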

This nicely dovetails with a number of conversations I've had recently, and there is so much speculation about password security that I felt compelled to list the potential security holes for passwords. There is very little you can do beyond minding your own passwords' strength; the rest is up to the companies and organizations that host your data. Here's my list of the most common ways that passwords get compromised.

Inadequate passwords – I suppose it makes sense to start off my list with this topic. First, a few important words about password strength. Simple passwords, such as those containing a limited number of letters and digits, for example "Test1234", can be cracked almost instantly on a typical laptop. When criminals get ahold of a username/password list, the first thing they do is run a dictionary attack, in which they try to compromise the easiest-to-break passwords first.

Unencrypted passwords stored in a database – Storing passwords without encrypting or hashing them is the same thing as leaving the keys in the ignition of your car with the door unlocked. A plain-text password table is like ice cream to anyone who has access to the database, legally or illegally: they can simply download the ready-to-use usernames and passwords. It doesn't matter how sophisticated your password is if it's stored unencrypted. There is no way for the user to know whether the passwords are encrypted or not; it's completely up to the IT department that controls the database.

Phishing virus – This virus can provide usernames and passwords for a targeted organization. The intent is to trick someone into entering their username and password on a fake website that mimics another website, such as an email logon screen. Once someone enters their credentials, they are immediately available to the criminals. The best prevention is to not open suspicious attachments and to keep your anti-virus software up to date. There is no 100% cure against phishing, since any attachment, even one from a legitimate source, could be compromised. But a good start is not opening attachments from someone you don't know, or ones with suspicious-sounding names. Trust your instincts.

Keystroke loggers – This is a spy program that can be installed on any computer. They can be very hard to detect, and they do exactly as advertised: they log everything you do on a computer and typically relay that information somewhere else on the internet where your data can be examined. The best protection against keystroke loggers, and viruses as well, is a multi-faceted approach: anti-virus programs, spyware sweepers and software firewalls. In addition, occasionally review which programs are running on your computer and research any program names you don't recognize.

Network Sniffers – Sniffer software can monitor all internet traffic over a network, and network sniffers can easily compromise public WiFi. The person who collects the sniffer data can sift through the digital traffic and siphon out different types of login requests. Once they have the login information, if it's encrypted they can run cracker programs against it. As an internet surfer, there is something you can do to help prevent your username and password from being siphoned off: purchase a consumer Virtual Private Network (VPN) product and always use it, especially if you are on open WiFi at some place like Starbucks or an airport.

Conclusion

Is there such a thing as a truly secure password? Definitely not. This is especially true if the database server containing the passwords has less-than-adequate security measures in place to protect it from unauthorized intrusion.

Is there anything you can do to protect yourself? Absolutely. My minimum recommendation is to get a password manager program. They can create strong, unique passwords for every website that you use. Some password managers also let you securely share passwords between phone, tablet and laptop. Search for the words "password manager" to find out more. Here's an article you can peruse as a starting point: pcmag.com. You should also keep your anti-virus software up-to-date and regularly run spyware scanners. Some folks go one step further and install a software firewall that lets you control all communications to and from your computer. Last but not least, you can use VPN software when surfing the internet.

So what did I do about the websites mentioned above with poor password security? In one case I wrote the CEO of the company, and I also switched to using the maximum number of characters and numbers, which in that case was 20; that website didn't really store anything vital. In the other case, I dropped the website like a lead balloon. If their password security is less than optimal, I couldn't help but question the rest of their digital information security practices.

Posted in Browsers, Security

Deleting an HTML Application Cache

When you are testing web applications that use an Application Cache, also sometimes called the manifest file, you have to clear the cache every time you make a change to the application. If you don't, then none of the changes you make will show up. The very purpose of the Application Cache is to semi-permanently store your HTML, CSS, JavaScript and images. It's becoming increasingly popular for speeding up web app performance, and it's a requirement for taking web apps offline. In fact, Google now uses an application cache for their home page.

Simply trying to delete your browser cache in the normal way won’t necessarily clear the Application Cache and its associated files. So here’s a quick rundown that will hopefully save you some time.

Chrome – browse to chrome://appcache-internals/.  There may be a number of different caches listed. Select ‘Remove’ for any cache that you want to go bye-bye.

Chrome (Mobile Android) – go to Settings > Privacy (under Advanced) > CLEAR BROWSING DATA, checkbox the ‘Clear the cache’ option and then select the ‘Clear’ button.

IE 10 – go to Tools > Internet Options > Settings > Caches and databases tab. Select the cache that you want to delete and then click the 'Delete' button.

Safari (Mobile) – For Safari iPhone and iPad go to Settings and select “Clear Cookies and Data.”

Safari (Desktop) – Simply attempting Develop > Empty Caches may not work. On a Mac you may have to close your browser, manually delete the .db file by going to ~/Library/Caches/com.apple.Safari and moving any item ending in .db to the trash, then restart the browser. If this doesn't work, try restarting your machine. Yep, it's an awful workflow, and it's been a known bug in Safari dating back to at least version 6.

Firefox (Desktop) – go to Tools > Options > Advanced > Network > Offline data > Clear Now.

Want to learn more about Application Caches? Here's a good technical overview from WHATWG describing what an application cache is. And MDN has a good article on Using the application cache.

Posted in Browsers

Making coding changes and reloading locally hosted web pages over and over again is a pattern familiar to web developers worldwide. Another familiar pattern is constantly wondering whether your changes are being cached by the browser and not properly reflected in what you are seeing.

Fear not… there is a very easy fix for this, and it doesn't involve using the browser's empty-cache options every single time between page reloads. Simply tell your local web server to send the browser a "no-cache" directive in the HTTP headers and you should be good to go.

Once you make this change, every web page you serve locally will refresh fully, every single time. Here's what the W3C has to say about the no-cache directive:

 When the no-cache directive is present in a request message, an application SHOULD forward the request toward the origin server even if it has a cached copy of what is being requested. 

Make the change in Apache. Here's how you make the change in your /etc/apache2/httpd.conf file on Mac OS X 10.8+. Depending on how your machine is set up, you can run the command "sudo pico httpd.conf", enter your admin password, and use the shortcuts listed at the bottom of the pico window or the 'up' and 'down' arrows on your keyboard to navigate around the file. Typically, the following text is pasted below any other 'filesMatch' tags that may reside in the configuration file. Once you are done, be sure to restart Apache. On Mavericks the command is "sudo apachectl restart":

<filesMatch "\.(html|htm|js|css)$">
    FileETag None
<ifModule mod_headers.c>
    Header unset ETag
    Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
    Header set Pragma "no-cache"
    Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</ifModule>
</filesMatch>

Make the change on IIS 7. If you want to make the change on Windows 7, Windows 2008/2008 R2 or Vista, then here is a link to Microsoft Technet. If you are using IIS Manager, in Step 4 choose the expire immediately option. Or, if you are using the command line, copy this line and run it:

    appcmd set config /section:staticContent /clientCache.cacheControlMode:DisableCache

If you have some other operating system version hopefully you get the idea from the suggestions above and apply similar changes for your system.

Optimizing your own public web server cache settings. One last note: the no-cache header setting is typically only used in a development environment. To get the best page performance for your visitors, you should allow some browser caching. If you want to learn more about optimizing browser caching, here is a good article.
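For production, a sketch along the same lines as the development snippet above, but letting static assets be cached; the one-week max-age is just an example value to tune against how often your content changes:

    <filesMatch "\.(js|css|png|jpg|gif)$">
        Header set Cache-Control "max-age=604800, public"
    </filesMatch>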

References

Optimizing Headers (Google)
RFC 2616, Section 14 Header Field Definitions
Configure HTTP Expires Response Header (IIS 7)
Manipulating HTTP headers with htaccess (you can make the same no-cache header change in httpd.conf in Mavericks)

Posted in Browsers

This post is about web applications designed for online-only usage that, for reasons beyond your control, will occasionally go offline or appear to have connection problems to non-techy end users. Even though we expect it, connectivity is not guaranteed. The good news: there are many things that you can control to help improve the usability of your sites and the perception of their uptime.

The internet is inherently unreliable: it goes up and down, and faster and slower, all the time. It's even more unreliable on the mobile web as compared to being plugged into a dedicated Ethernet or WiFi connection. Failures can happen within the app, on the internet connection, and even at the web server or CDN, and when they happen they can frustrate users and eventually turn them into unhappy customers. The challenge for you as web developers and IT managers: it's often hard for the people managing websites to get a good look at the end-user experience because it can be so hard to duplicate.

In general, most users blame their "internet connection", which is a euphemism for "it's the cellphone provider's fault" or "the DSL or cable company's fault". Most people don't know or really care where the problem is; they just want it fixed. A common reflex when there is a problem is to simply reload the entire page. In some cases a full page reload isn't possible, or it's painful, such as on more complex sites where a full reload means potentially walking back through several steps to get to the final page or view.

So here are a few suggestions to help you, as a web developer, minimize occasional disruptions and keep users as happy as possible. Some of these are major repeats, but they are well worth seeing yet again:

Performance. Make your web pages as lightweight as possible. Pages that load faster will 'appear' more responsive even if you aren't concerned about millisecond response times. Most of you have already had this drilled into your head over and over: aim for fewer and smaller files, use CDNs, move CSS and JavaScript library loading to the bottom of your HTML pages, use inline images, and the list goes on and on. There are many articles on the web about improving performance; search for 'website performance' to find out more. As another example, Steve Souders has an excellent website and has even written books on the subject.

Caching. Consider page cache settings carefully. The subject of setting cache headers, such as ETag, Expires and Last-Modified, is often overlooked and usually misunderstood. Cached content cuts down on the total number of HTTP requests when someone loads your web page. Static content, or content that doesn't change much, usually gets longer cache times than content that changes frequently. Even though there are many articles on the web about caching, doing it well can be tricky, and it can be very handy to hire an expert to figure out optimal configurations in a short period of time. In my case, I spent several months experimenting while subjecting my blog readers to unnecessary page lag and a variety of other problems, until I finally broke down and hired an expert.

HTTP requests that block. Be aware of any HTTP operations that block the loading or use of your pages. If you have to use a blocking HTTP request, then make sure you set a timeout in the client request, such as 20 seconds, and display some sort of loading icon. A good web designer can help walk you through the UI experience. Most modern web servers have server-side timeouts that are longer than most people are willing to wait.

Auto-retry. Alternatively, consider a significantly shorter HTTP timeout setting, and retry the connection several times before failing and notifying the user that the app couldn't connect. These days a single failed request doesn't necessarily mean the website is down. Yet very, very few websites employ this pattern, so what happens instead is that most people reflexively keep hitting reload when there are loading problems. Reloading an entire page is much more bandwidth-intensive on your servers than having the app quietly and quickly retry a specific item in the background.
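Here's a sketch of that auto-retry pattern with XMLHttpRequest: a short client-side timeout plus a couple of quiet background retries before giving up and notifying the user. The retry count and delays are arbitrary example values:

    function getWithRetry(url, retriesLeft, callback) {
        var req = new XMLHttpRequest();
        req.open("GET", url);
        req.timeout = 5000; // fail fast rather than waiting on the browser default

        req.onload = function() {
            if (req.status === 200) {
                callback(null, req.responseText);
            } else {
                retryOrFail();
            }
        };
        req.onerror = retryOrFail;
        req.ontimeout = retryOrFail;

        function retryOrFail() {
            if (retriesLeft > 0) {
                // Retry quietly in the background after a short pause.
                setTimeout(function() {
                    getWithRetry(url, retriesLeft - 1, callback);
                }, 2000);
            } else {
                callback(new Error("Could not connect."), null);
            }
        }

        req.send(null);
    }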

More efficient database polling. Long-running database queries can give the impression that the connection is broken. If you have requirements to poll a server-side database for changes, consider implementing a server-based process that simply returns a JSON Boolean such as {"changes": false} if there are no changes. In comparison, most server-side database requests run entire, potentially complex SQL queries with every request just to tell you nothing changed. From a server resource preservation viewpoint, it's significantly less overhead to return a simple JSON Boolean and let a long-running server-side process do the heavy lifting on a regular timer cycle.

Fail gracefully. Don't hang an entire page if your app fails to load a JavaScript library, some other content throws a 404 error, or a database request fails. Don't do it. I know this seems obvious, but I see it all the time in my daily web surfing; most major companies seem to be guilty of this for activities such as viewing billing pages. See my suggestions above for handling HTTP requests. Let the end user know through some sort of pop-up that a connection has failed or timed out. Native mobile applications have built-in mechanisms for doing this, and granted, they can auto-detect when the internet connection goes down, but I still believe regular web apps should mimic that behavior when possible.

ApplicationCache. Consider storing some pages and resources for when a connection goes down by using the HTML5 ApplicationCache interface. This lets you go beyond the typical caching mechanisms, using patterns that can be easier to understand and control compared to the somewhat black-box and variable nature of header settings.

Feedback. The ability to email web administrators directly has lost favor over the last five years or so. I suggest bringing this back in a big way, along with clearly posted links. Sometimes the best way to know something is down or slow is to hear it directly and immediately from a customer. Yeah sure, you'll get some spam email, but if it means keeping customers happy, there are both automated and manual ways to deal with it. I can speak personally on this topic: my blog has received over 40,000 spam attempts, of which I've personally deleted over 3,000, and I'm just a team of one. Some techy sites do provide a "Performance" section in their forums, which is fine as long as employees are actually monitoring it (often). The problem with forums is notification of new posts, which, of course, is usually done via email.

Uptime Monitors. Use uptime monitors from different spots around the country you live in, or around the world if you are using a worldwide CDN. Some providers can do this for you, but you should ask questions. The most common scenario I've seen is that the uptime monitor lives in the same server farm as the web server. That is okay, but it doesn't cover connectivity outside your firewall. Uptime monitors should not just ping a website; they should also attempt to load and parse actual content, throw a warning email or text message if the content throws an error, and throw a warning if a connection takes too long. There are many reasons why you may think your website is up when it's not. For example, a CDN node could be down, a CDN server could have the wrong permissions, a major internet router could be down, or your support folks could be using an internal pathway to view pages on your web server that is no longer visible to the outside world. These types of monitors don't cost much to operate and can significantly boost customer service ratings and help keep customers happy.

Browser Support. Last but not least, and probably the touchiest subject, is browser support. My recommendation: if you don't support a particular browser type, then give the end user a message saying some functionality may not work properly. We've all been to sites on our tablets or phones where popups didn't work right or things didn't display properly. Non-tech-savvy end users can easily misunderstand these types of things, since they give the appearance that something is broken; if a popup didn't work, it may appear that a sale didn't complete, for example. It's very easy these days to use libraries for browser detection, and browser detection should always be part of a web app deployment plan.

Resources

HTTP Caching Protocols (W3C)

What is a CDN?

Beginners Guide to ApplicationCache

Browser support – Caniuse.com

Posted in Browsers, Internet