Web Performance Calendar

The speed geek's favorite time of year
2011 Edition
ABOUT THE AUTHOR

For the last fourteen years, Alex Podelko (@apodelko) has worked as a performance engineer and architect for several companies. Currently he is a Consulting Member of Technical Staff at Oracle, responsible for performance testing and optimization of Hyperion products. Alex also serves as a director of the Computer Measurement Group (CMG) and maintains a collection of performance-related links and documents.

There seems to be great interest in quantifying the impact of performance on business, linking response time to income and customer satisfaction. A lot of information has been published on the subject, for example, the Aberdeen Group report Customers Are Won or Lost in One Second and the Gomez whitepaper Why Web Performance Matters: Is Your Site Driving Customers Away? There is no doubt that a strong correlation exists between response times and business metrics, and it is very good to have such documents to justify performance engineering efforts. Some simplification may even be good from a practical point of view. Still, we should keep in mind that the relationship is neither simple nor linear, and there are cases where that matters.

Response times may be considered usability requirements and are based on the basic principles of human-computer interaction. As long ago as 1968, Robert Miller’s paper Response Time in Man-Computer Conversational Transactions described three threshold levels of human attention. Jakob Nielsen believes that Miller’s guidelines are fundamental for human-computer interaction, so they are still valid and not likely to change with whatever technology comes next. These three thresholds are:

  • Users view response time as instantaneous (0.1-0.2 second)
  • Users feel they are interacting freely with the information (1-5 seconds)
  • Users are focused on the dialog (5-10 seconds)
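These bands can be sketched as a small classifier; the cut-off values are the upper bounds quoted above, while the function name and band labels are my own illustrative choices, not from Miller or Nielsen:

```javascript
// Classify a measured response time (in seconds) into the three
// attention bands described above. Band labels are illustrative.
function attentionBand(seconds) {
  if (seconds <= 0.2) return "instantaneous";     // feels like direct manipulation
  if (seconds <= 5)   return "free interaction";  // flow of thought stays intact
  if (seconds <= 10)  return "focused on dialog"; // attention still on the task
  return "attention lost";                        // users start abandoning
}

console.log(attentionBand(0.15)); // "instantaneous"
console.log(attentionBand(3));    // "free interaction"
console.log(attentionBand(12));   // "attention lost"
```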

Users view response time as instantaneous (0.1-0.2 second): Users feel that they are directly manipulating objects in the user interface. For example, the time from the moment the user selects a column in a table until that column highlights, or the time between typing a symbol and its appearance on the screen. Robert Miller reported this threshold as 0.1 seconds. According to Peter Bickford, 0.2 seconds forms the mental boundary between events that seem to happen together and those that appear as echoes of each other.

Although this is quite an important threshold, it is often beyond the reach of application developers. That kind of interaction is provided by the operating system, browser, or interface libraries, and usually happens on the client side, without interaction with servers (except for dumb terminals, which are rather an exception for business systems today). However, new rich web interfaces may make this threshold important to consider: if there is logic processing user input, so that screen navigation or typing becomes slow, it may cause user frustration even at relatively small response times.

Users feel they are interacting freely with the information (1-5 seconds): They notice the delay, but feel that the computer is “working” on the command. The user’s flow of thought stays uninterrupted. Robert Miller reported this threshold as one to two seconds.

Peter Sevcik identified two key factors impacting this threshold: the number of elements viewed and the repetitiveness of the task. The number of elements viewed is, for example, the number of items, fields, or paragraphs the user looks at. The amount of time the user is willing to wait appears to be a function of the perceived complexity of the request.

Back in the 1960s through the 1980s, the terminal interface was rather simple and a typical task was data entry, often one element at a time. So earlier researchers reported one to two seconds as the threshold for keeping maximal productivity. Modern complex user interfaces with many elements may have higher response times without adversely impacting user productivity. Users also interact with applications at a certain pace depending on how repetitive each task is. Some tasks are highly repetitive; others require the user to think and make choices before proceeding to the next screen. The more repetitive the task, the better the response time should be.

This is the threshold that gives us response time usability goals for most user-interactive applications. Response times above it degrade productivity. Exact numbers depend on many difficult-to-formalize factors, such as the number and types of elements viewed or the repetitiveness of the task, but a goal of two to five seconds is reasonable for most typical business applications.
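In practice such a goal is usually checked against a percentile of measured response times rather than the average. A minimal sketch, where the 5-second goal, the 90th percentile, and the sample data are all illustrative assumptions, not figures from this article:

```javascript
// Return the p-th percentile (0 < p <= 1) of a sample of response
// times in seconds, using the nearest-rank method.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil(p * sorted.length) - 1);
  return sorted[idx];
}

// Hypothetical measurements; goal: 90th percentile within 5 seconds.
const samples = [1.2, 2.8, 3.1, 4.0, 4.4, 2.2, 6.5, 3.7, 2.9, 3.3];
const p90 = percentile(samples, 0.9);
console.log(p90);                               // 4.4
console.log(p90 <= 5 ? "goal met" : "goal missed"); // "goal met"
```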

Some researchers suggest that response time expectations increase with time. Forrester research from 2009 suggests a two-second response time; in 2006 similar research suggested four seconds (both research efforts were sponsored by Akamai, a provider of web acceleration solutions). While the trend probably exists (at least for Internet and mobile applications, where expectations have changed a lot recently), the approach of this research has often been questioned because it simply asked users, and user perception of time is known to be misleading. Also, as mentioned earlier, response time expectations depend on the number of elements viewed, the repetitiveness of the task, user assumptions about what the system is doing, and how the interface interacts with the user. Stating a standard without specifying what kind of page we are talking about may be an overgeneralization.

Users are focused on the dialog (5-10 seconds): They keep their attention on the task. Robert Miller reported this threshold as 10 seconds. Users will probably need to reorient themselves when they return to the task after a delay above this threshold, so productivity suffers. Or, if we are talking about Web sites, it is the threshold at which users start abandoning the site.

Peter Bickford investigated user reactions when, after 27 almost instantaneous responses to the same operation, there was a two-minute wait loop on the 28th attempt. It took only 8.5 seconds for half the subjects to either walk out or hit the reboot button. Switching to a watch cursor during the wait delayed the subjects’ departure for about 20 seconds. An animated watch cursor was good for more than a minute, and a progress bar kept users waiting until the end. Bickford’s results have been widely used for setting response time requirements for web applications.
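One way to read those observations is as a rough mapping from expected wait to the feedback mechanism needed to keep users around. The function below is my own loose sketch of that reading; the exact cut-offs are approximations inferred from the numbers above, not values Bickford prescribed:

```javascript
// Suggest a wait-feedback mechanism for an expected delay (in seconds),
// loosely based on Bickford's observations. Cut-offs are approximate
// and illustrative, not prescribed values.
function waitFeedback(expectedWaitSeconds) {
  if (expectedWaitSeconds <= 8.5) return "no indicator needed"; // users still waiting
  if (expectedWaitSeconds <= 20)  return "watch cursor";        // buys ~20 more seconds
  if (expectedWaitSeconds <= 60)  return "animated cursor";     // good for over a minute
  return "progress bar";                                        // keeps users to the end
}

console.log(waitFeedback(15));  // "watch cursor"
console.log(waitFeedback(120)); // "progress bar"
```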

This is the threshold that gives us response time usability requirements for most user-interactive applications. Response times above it cause users to lose focus and lead to frustration. Exact numbers vary significantly depending on the interface used, but it looks like response times should not exceed eight to ten seconds in most cases. Still, the threshold shouldn’t be applied blindly; in many cases significantly higher response times may be acceptable when an appropriate user interface is implemented to alleviate the problem.

So while there is a strong correlation between response times and business metrics, it is definitely not a linear function. Here we are touching on the psychology of human-computer interaction, and it is definitely not a single-dimensional issue. It is very context-specific, and published data should be used carefully, with an understanding of what really stands behind it. The main practical conclusion is that you may reach a point where further performance improvement doesn’t make much sense: the cost of improvement keeps increasing while the business value it adds diminishes. Although it looks like most existing systems haven’t reached that point yet.