
Web Performance Calendar

The speed geek's favorite time of year
2022 Edition
ABOUT THE AUTHOR

Tanner Hodges (@atannerhodges) is a web developer at Red Ventures, trying to help people understand complex things in simple ways.

Prologue

In 1949, Claude Shannon married a computer.

Her name was Betty.1

Mary Elizabeth “Betty” Moore graduated from the New Jersey College for Women in 1948 and began work the next day as a computer at Bell Laboratories.2

Her job was to perform mathematical calculations for Bell Labs engineering teams and her performance was based on the speed and accuracy of her results.3

Introduction

The history of web performance is the history of computer performance.

Less than 100 years ago, computers were people. Since then we’ve gone from gear-driven analog calculators to interstellar digital systems.4

How did we get here?

World War II: The Dawn of Modern Computing

In the 1940s, we needed firing tables—large spreadsheets of numbers that helped gunners aim their massive artillery.

Through the middle of the war, the Applied Mathematics Panel found that the expense of human computers was close to the cost of machine calculation. In a competitive bid between the Mathematical Tables Project and Bell Telephone Laboratories, Arnold Lowan estimated that it would cost $1,000 for his staff to do the work, while the computers under Thornton Fry stated that the calculations would require $3,000. The panel rejected both bids, judging that they exceeded the value of the computation.5

During the war, demand for computation skyrocketed.

With human computers hard to find and an old analyzer struggling to meet the precision requirements, the computing staff was looking to build an improved computing machine, an electronic version of the differential analyzer. This new machine, tentatively called the Electronic Numerical Integrator and Computer, or ENIAC, would have no mechanical parts that could slip or jam or in some other way induce inaccuracy.6

The U.S. Army spent over $400,000 to build the ENIAC.7

Newspapers hailed this new “brain” as being 5,000 times faster than human computers.8

How exactly did they measure the ENIAC’s speed? I’m not sure yet, but it’s clear that we’ve been measuring computer performance since day one—and speed mattered.

1950s: Speed! Accuracy! Reliability!

The race was on to make faster, more reliable computers.

The biggest constraint on performance was memory.

“The fast memory was not cheap, and the cheap memory was not fast.”9

The tradeoff was serial processing.

Fetch, execute, return. Fetch, execute, return… At least in terms of 1940s technology, [von Neumann] said, serial operation was the price to be paid for reliability. Moreover, he had a point: it was only in the late 1970s, with the availability of reliable and inexpensive microchips, that computer scientists would begin serious experimentation with “parallel” computers that could carry out many operations simultaneously.10

More Meaningful Metrics

One of the first people to seriously attempt parallel computing was Jay W. Forrester on Project Whirlwind, a real-time flight simulator.

In 1951, Forrester gave a presentation titled “Digital Computers: Present and Future Trends”, in which he critiqued the state of computing at the time and used the word “performance” over 20 times.

We are firmly on the threshold of a new field, but the digital computer work has reached no real maturity.

There are many machines to be compared, but for the most part we find ourselves without any test or comparison criteria. There is fair agreement regarding the definitions of speed and storage capacity in digital computers. The time has come for more sophisticated criteria of comparison.

Forrester pushed for more meaningful metrics.

There can be no significance in cost figures, unless accompanied by a measure of performance and reliability. We discuss the number of vacuum tubes in a machine, but this in itself lacks significance unless we have a measure of the utility of those tubes, in other words how much computation will they perform for us.

He outlined areas for improvement and offered suggestions, such as:

  • Storage Criteria: bits accessed per microsecond
  • Design Efficiency: operations per second per unit of equipment
  • Reliability Criteria: maintenance hours per million operations

Reliability was the hot topic of the day.

If asked how good your watch is, you don’t say that it gives 1,439 minutes correct out of the day, but rather that it loses 1 minute per day. Ratios of inoperative time are better indices of quality than ratios of operating time.

The engineer here has a challenge to build a reliability into his computing machine comparable to that existing in other modern machines. The airplane engine operates 1,000 hours between overhauls and many times that between failures. Why not similar performance from a computer?

This is when computing put down roots.

How else did the field of computing mature in the 1950s? What measures did they use? What principles and best practices did they develop?

The 1950s were the Cambrian explosion of computing: new ideas, new devices, mass production, the first computer science courses, the first major programming languages, even the birth of artificial intelligence.11

And then came Sputnik.

1960s: Time-Sharing, Graphics, and Networking

The launch of Sputnik rocked the nation like no event since Pearl Harbor… There was the growing crisis in command and control… The Nuclear Age had given it an urgency greater by several orders of magnitude.12

When ARPA went looking for someone to lead their Command and Control Research initiative, they found J. C. R. Licklider.13

Lick captured everyone’s attention inside the first few sentences. Many of the ARPA and DDR&E staffers had used batch-processed computers for number-crunching and such. But here was Lick talking about time-sharing, interactive graphics, networking—concepts an order of magnitude further advanced. “It was a revelation,” says Herzfeld.14

Licklider cast a vision for human-computer interaction. He went further than command and control. He fostered (and funded!) the community that ultimately created the Internet.15

It began with time-sharing.

Threshold of Impatience

The idea of multiple users simultaneously sharing a single computer (a server) began in earnest with Project MAC and CTSS.

“It was a sociological phenomenon,” says Fano… “All sorts of human things happened, ranging from people destroying keyboards out of frustration, to friendship being born out of using somebody else’s program.”16

CTSS’s fundamental model of human-computer interaction—the notion of a user’s sitting down at a keyboard and having a set of commands at his or her disposal—can still be found today at the heart of MS-DOS, Windows, Macintosh, Linux, and many other operating systems.17

In 1964, when the MAC System officially went online, its performance had an immediate impact on users.

Then there was the system’s sluggishness. When it came to raw processing power, the IBM 7094 was roughly equivalent to the first IBM PC, marketed in 1981. So when that small amount of power was divided among multiple users, the machine’s response time plummeted to what Fano called the Threshold of Impatience: “I could watch a user sitting there giving a command, and then I would see his hand falling down from the keyboard [into his lap]. That took place in about five seconds.”17

Bob Fano wrote in “The MAC System: A Progress Report”:

The performance figure of greatest interest to the user is the response time (the time interval between the issue of a request and the completion on the part of the computer of the requested task) as a function of the bare running time of the corresponding program. The response time depends on the scheduling algorithm employed by the system, as well as on the number and character of the requests issued by other users.

Here, in the 1960s, performance began to shift from the machine to the user.

Miller’s Response Times

In 1968, Robert B. Miller published his famous “Response Time in Man-Computer Conversational Transactions”.

This is the research cited by Nielsen, Google, and basically everyone who brings up the 100ms, 1s, and 10s thresholds for response times.

The literature concerning man-computer transactions abounds in controversy about the limits of “system response time” to a user’s command or inquiry at a terminal.

This paper attempts a rather exhaustive listing and definition of different classes of human action and purpose at terminals of various kinds. It will be shown that “two-second response” is not a universal requirement.

Rather than dictate a single time limit, Miller catalogued 17 different topics, each with its own recommended time delays (and some with special comments).

  1. Response to control activation: <0.1s or <0.2s depending on input type
  2. Response to “System, are you listening?”: <3s
  3. Response to “System, can you do work for me?”: <2s for routine requests; <5s for complex requests
  4. Response to “System, do you understand me?”: 2–4s
  5. Response to Identification: <2s or up to 7s depending on context
  6. Response to “Here I am, what work should I do next?”: Up to 10–15s
  7. Response to simple inquiry of listed information: <2s
  8. Response to simple inquiry of status: Up to 7–10s
  9. Response to complex inquiry in tabular form: <4s or staggered 2–4s per item
  10. Response to request for next page: <1s
  11. Response to “Now run my problem.”: <15s
  12. Response to delay following keyboard entry vs. light-pen entry of category for inquiry: <1s or 2–3s depending on input type
  13. Graphic response from light pen: <0.1s or <1s depending on interaction type
  14. Response to complex inquiry in graphic form: <2s for initial render; <10s for completion
  15. Response to graphic manipulation of dynamic models: Needs more study; unable to guess
  16. Response to graphic manipulation in structural design: <2s during creative effort; up to 1–2 minutes for completion
  17. Response to “Execute this command into the operational system.”: <4s to acknowledge; variable time for completion depending on context

Comment: “It is rude (i.e., disturbing) to be interrupted in mid-thought.” Miller recommends not interrupting users with error messages immediately. Instead, pause for 2 seconds and then indicate the error.
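
Those bands survive nearly unchanged in today’s guidance. As a minimal sketch (mine, not Miller’s), here is how the 0.1s / 1s / 10s rule of thumb that later writers distilled from this research might be expressed in code; the function name and messages are my own:

```typescript
// A sketch of the 0.1 s / 1 s / 10 s bands later distilled from this research.
function classifyResponse(ms: number): string {
  if (ms <= 100) return "perceived as instantaneous";
  if (ms <= 1_000) return "noticeable delay, but flow of thought is preserved";
  if (ms <= 10_000) return "attention strained; show progress feedback";
  return "user has likely given up or switched tasks";
}

// Example: time an operation with the browser's high-resolution clock.
const start = performance.now();
// ...the interaction being measured goes here...
console.log(classifyResponse(performance.now() - start));
```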

IBM’s Performance Analysis Tools

Robert B. Miller worked at IBM.18

IBM was clearly invested in performance and performance research. After all, they were the ones who introduced FORTRAN, the first major programming language, and they made the first mass-produced computer, the IBM 650.

They also, as far as I can tell, created the first performance monitoring tool: the Program Monitor.19

In 1961 a group was established within IBM to test systems programs before they were released for customer usage. The goal of this group was to assure IBM management that each program released would be satisfactorily usable by the customer.

One step taken by this group was to develop a monitor device which would permit programmers to record information being handled by the CPU during execution.

They discussed the Program Monitor’s value:

What is the value of the measurement potential displayed by the Program Monitor? The value of performance measurement in general is beyond the scope of this paper, but its value is generally accepted. But what does the Program Monitor contribute to measurement. Principally, it contributes the ability to observe the program at a macro level of detail, – the phase –, or at a minute level of detail, – the routine. It allows one to observe the interplay of components of the system and determine if a significant time advantage could be gained by overlapping certain events. It also allows the programmer to determine the advantage to be gained from recoding more efficiently a specific routine. Quite frequently it has been our experience that the programmer immediately recognizes areas that can be improved or design changes that should be implemented upon seeing this kind of output. Often we find the programmer saying after a monitor run “yes we knew this routine would be using a lot of time but we did not expect it to be that much”. Invariably, the next line is “if we recode this routine we can cut the time by…”

Performance tooling was just getting started.20

And so was the Internet.

ARPANET

After J. C. R. Licklider and Ivan Sutherland, ARPA’s next director of IPTO was Bob Taylor.

Taylor’s goal was to create a nationwide network, to make Lick’s Intergalactic Computer Network a reality. So in 1966 he got the funding and then pulled in Larry Roberts to spearhead the project.21

Basically, their plan was:

  • Full-Time Access: Lease AT&T phone lines to keep connections open and avoid dial-up delays between requests.
  • Messages Divided Into Packets: Break up messages into packets with error-correcting codes to overcome noise over long-distance lines (a toy sketch of the idea follows this list).
  • Distributed Control: Share routing responsibilities across the network so no single failure can take it down, and use the old store and forward technique from telegraphy to keep packets from getting lost.
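
That second point is the heart of packet switching. As a toy sketch (my own, far simpler than the real IMP software, with made-up field names), packetizing and reassembling a message looks roughly like this:

```typescript
// A toy sketch of "messages divided into packets": split a message into numbered
// packets so it can be reassembled even if pieces arrive out of order.
// (Illustrative only; real ARPANET packets carried far more metadata and
// error-correcting codes.)
interface Packet { seq: number; total: number; payload: string; }

function packetize(message: string, size: number): Packet[] {
  const packets: Packet[] = [];
  const total = Math.ceil(message.length / size);
  for (let seq = 0; seq < total; seq++) {
    packets.push({ seq, total, payload: message.slice(seq * size, (seq + 1) * size) });
  }
  return packets;
}

function reassemble(packets: Packet[]): string {
  return [...packets].sort((a, b) => a.seq - b.seq).map((p) => p.payload).join("");
}

const packets = packetize("LOGIN REQUEST FROM UCLA TO SRI", 8);
console.log(reassemble(packets.reverse())); // arrives out of order, still reassembles
```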

Performance was there from the beginning:

“Larry brought a bunch of us back to Washington to help him specify what this network would look like and what performance characteristics it would have,” says Kleinrock, remembering the one and only meeting of Roberts’s networking advisory committee. “I banged my fist on the table and said, ‘We’ve got to put measurement software in there! If this network is going to be an experiment, we must make measurements!’” Moreover, they needed data from day one: How many packets actually flowed through the network? When and where did they pile up into digital traffic jams? How often did they get lost? What kind of data rates did you actually achieve? What traffic conditions broke the network?22

In 1969, the ARPANET was officially online.23

This was the birth of internet performance: response times, reliability, packet sizes, and network speeds.

How did performance analysis differ between hardware, software, and networking? What tools did they use? How did they analyze the data? What lessons did they learn?

1970s: Computer Performance Evaluation

Bob Taylor went on to lead a new initiative at Xerox: the Palo Alto Research Center, or PARC. Long story short, PARC basically created the personal computer but Xerox dropped the ball and Steve Jobs ended up with the tech.24

A lot happened in the 1970s!25

All of a sudden, performance was taking off.

What started this trend?

I’m not sure, but at least one thing is clear: what began in the 1960s was formalized in the 1970s.

Computer Performance Evaluation was officially a thing™.

According to Computer Performance Evaluation (CPE): An Auditor’s Introduction:

CPE is a specialty of the computer profession that concerns itself with the efficient use of computer resources.

Tools or techniques used to conduct these measurements include accounting data reduction programs, software monitors, program analyzers/optimizers, hardware monitors, benchmarks, and simulation.

Computer performance evaluation, as we know it today, probably began between 1961 and 1962 when IBM developed the first channel analyzer to measure the IBM 7080 computer performance. The real push toward performance monitoring began between 1967 and 1969.

Sponsored by the U.S. Army and National Bureau of Standards, the Computer Performance Evaluation Users Group (CPEUG) finally published their papers to the general public in 1974.26

Meanwhile, Vint Cerf and Bob Kahn were busy hammering out the details of TCP/IP, the Internet protocol suite.

They published their paper in 1974, “A Protocol for Packet Network Intercommunication”, and then set to work “making the TCP/IP protocol suite as robust and as bulletproof as possible.”27

Two other events are worth noting before we jump to the 1980s:

  1. Donald Knuth wrote a paper in 1971, “An Empirical Study of FORTRAN Programs,” that inspired a young Connie Smith to study software performance.28
  2. Aubrey Chernick started a small company called Candle in 1976. Their first product, OMEGAMON, was (and still is) a real-time performance monitor for IBM mainframes.29

1980s: Software Performance Engineering

Time’s Person of the Year for 1982 was the computer.

As personal computing took the world by storm, the ARPANET switched over to TCP/IP and the Internet was officially born.30

What else was going on in the 1980s?

Software.

More and more people were talking about software in the ’80s, how to develop it, how to manage it.31

In 1981, Connie U. Smith coined the term Software Performance Engineering.32

Software performance engineering (SPE) is a discipline used throughout the software life cycle… to ensure the satisfactory performance of large scale software systems.

It is currently extremely expensive to develop and maintain software systems. Thus, performance engineering in particular and quality engineering in general are becoming increasingly important aspects of software development.

Since then, Dr. Connie U. Smith has continued to develop the SPE method, creating a product around it (SPE-ED) and publishing two books (Performance Engineering of Software Systems and Performance Solutions) as well as a number of other publications.33

What does SPE have to say about the Web? What practices do we share? What lessons can we learn from each other?

Meanwhile, as competition was heating up in the PC market, the Internet was transforming.

Honestly, I’ve had a hard time following everything that happened next, but here are a few key events:

  • Universities lobbied the NSF to provide equal access to ARPANET in 1979.34
  • NSF launched CSNET in 1981 and NSFNET in 1986.35
  • NSF decentralized management of NSFNET, creating regional networks, and then forced those regional networks to go and find their own commercial customers (i.e., to become internet service providers).36
  • ARPANET users migrated to NSFNET and then ARPANET was decommissioned in 1990.37

In short, the Internet was growing up to live on its own.38

1990s: World Wide Web

Performance was everywhere in the ’90s.

We had “high performance” everything: from leadership and investing, to schools and skiing, to aircraft, sailing, and even concrete.39

Why? Who knows. Maybe it had something to do with the government’s new National Performance Review, but I can only speculate.40

Meanwhile, Tim Berners-Lee had quietly published his new WorldWideWeb project to the Internet on August 6, 1991, answering a question in the alt.hypertext newsgroup.

Nari Kannan asked:

Is anyone reading this newsgroup aware of research or development efforts in the following areas… Hypertext links enabling retrieval from multiple heterogeneous sources of information?

Tim wrote back:

The WorldWideWeb (WWW) project aims to allow links to be made to any information anywhere.

We have a prototype hypertext editor for the NeXT, and a browser for line mode terminals which runs on almost anything. These can access files either locally, NFS mounted, or via anonymous FTP. They can also go out using a simple protocol (HTTP) to a server which interprets some other data and returns equivalent hypertext files.

We also have code for a hypertext server. You can use this to make files available (like anonymous FTP but faster because it only uses one connection). You can also hack it to take a hypertext address and generate a virtual hypertext document from any other data you have – database, live data etc. It’s just a question of generating plain text or SGML (ugh! but standard) mark-up on the fly. The browsers then parse it on the fly. The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome! I’ll post a short summary as a separate article.

The rest, as they say, is history.

But did you notice? “Faster because it only uses one connection”! You can make virtual pages on the fly! Browsers parse markup on the fly!

I don’t know how he measured it, but Tim was thinking of performance from day one.

In fact, as he tells it in his book, Weaving the Web:

I therefore defined HTTP, a protocol simple enough to be able to get a Web page fast enough for hypertext browsing. The target was a fetch of about one-tenth of a second, so there was no time for a conversation. It had to be “Get this document,” and “Here it is!”41
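
To make that design concrete, here is a minimal sketch of an HTTP/0.9-style fetch, written in modern TypeScript for Node.js rather than anything from the original line-mode code: one TCP connection, a single request line, the raw document back, and the server closes the socket. (Modern servers generally refuse 0.9-style requests, so treat this as an illustration of the protocol’s shape, not a working client; the host and path are placeholders.)

```typescript
// A sketch of an HTTP/0.9-style exchange: one connection, one request line,
// the raw document back, then the server closes the socket. No headers,
// no status codes, no negotiation.
import { createConnection } from "node:net";

const socket = createConnection({ host: "example.com", port: 80 }, () => {
  socket.write("GET /index.html\r\n"); // "Get this document"
});

socket.on("data", (chunk) => process.stdout.write(chunk)); // "Here it is!"
socket.on("end", () => console.log("\n[server closed the connection]"));
```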

The Internet Tidal Wave

The early Web was simple, but not for long.

Mosaic, the first major browser, famously introduced the <img> tag in 1993.42

Shortly after that, Marc Andreessen left the Mosaic team to join Jim Clark and co-found Netscape in 1994.

Netscape Navigator sparked a phenomenon.43

Netscape Navigator was a generational improvement over the other browsers then available. Navigator was fast, even working under the constraints of the slow modem speeds that were standard at the time. By some measurements, Navigator could load a webpage ten times faster than Mosaic.44

Again, I’m not sure how they measured this. But it’s clear that speed mattered. Features mattered. And browsers were becoming a battleground for performance.

On May 26, 1995, Bill Gates sent a memo to Microsoft’s executive staff titled “The Internet Tidal Wave”.

I have gone through several stages of increasing my views of its importance. Now I assign the Internet the highest level of importance… The Internet is the most important single development to come along since the IBM PC was introduced in 1981.

Amazingly it is easier to find information on the Web than it is to find information on the Microsoft Corporate Network. This inversion where a public network solves a problem better than a private network is quite stunning.

The strength of the Office and Windows businesses today gives us a chance to superset the Web. One critical issue is runtime/browser size and performance. Only when our Office – Windows solution has comparable performance to the Web will our extensions be worthwhile. I view this as the most important element of Office 96 and the next major release of Windows.

Gates listed several next steps for Microsoft.

His first two were:

  1. Server: “We need to understand how to make NT boxes the highest performance HTTP servers… Our server offerings need to beat what Netscape is doing including billing and security support. There will be substantial demand for high performance transaction servers.”
  2. Client: “First we need to offer a decent client (O’Hare) that exploits Windows 95 shortcuts. However this alone won’t get people to switch away from Netscape… Getting the size, speed, and integration good enough for the market needs work and coordination.”

A few months later, Microsoft released Internet Explorer version 1 in August and version 2 in November 1995.45

Thus began the First Browser War.

Killer Web Sites

As Netscape and Microsoft battled for market dominance, early web designers and webmasters scrambled to keep up with new features.

In 1996, Amazon.com’s #1 best-selling book was Creating Killer Web Sites by David Siegel, “the first true design book for the Web”.46

This was the book that taught us how to do table-based layouts. It was also one of the first books on web performance. Prominently displayed on the inside cover was Siegel’s credo:

Quality Brevity Bandwidth

Throughout the book, Siegel consistently reminds web designers of bandwidth, file sizes, screen sizes, and browser capabilities.

Above all, splash screens should load quickly. Your first screen should take no more than 15 seconds to load at prevailing modem speeds — faster if possible. Present your visitors with a tedious download, and they’ll be at Yahoo! before your access counter can tell you what happened.47

Yes, 15 seconds. The maximum speed for modems in 1996 was 33.6 Kbps. When you do the math, a 50KB file at those speeds takes ~13 seconds.
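
The back-of-the-envelope math, for the curious (a rough sketch that ignores TCP/PPP overhead, which only made real downloads slower):

```typescript
// Download time for a 50 KB file over a 33.6 Kbps modem, ignoring protocol overhead.
const fileKB = 50;
const modemKbps = 33.6;
const seconds = (fileKB * 1024 * 8) / (modemKbps * 1000);
console.log(`${seconds.toFixed(1)} s`); // ≈ 12.2 s, closer to ~13 s once overhead is included
```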

As Siegel points out in Chapter 1, 70% of web surfers at the time were connected via modems (i.e., dial-up).48

Web Surfer Demographics                    July 96   1998   2000
Modem (dial-up) users                        70%      90%     ?
Surfers at 14.4 Kbps                         30%       3%
Surfers at 28.8 Kbps                         40%      80%    10%
Surfers with ISDN (128 Kbps) or better        3%      12%    80%

Siegel shares how to measure and reduce file sizes, how to leverage caching, and even how to preload images for instant page transitions.49
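
The preloading trick itself has barely changed since. Here is a minimal sketch of the idea, my own example rather than code from the book, with made-up image URLs: request the next page’s images early so they are already in the browser cache when the visitor navigates.

```typescript
// A sketch of 1990s-style image preloading: start the downloads now so the
// images load from cache on the next page. The URLs here are made up.
function preloadImages(urls: string[]): HTMLImageElement[] {
  return urls.map((url) => {
    const img = new Image(); // setting .src kicks off the request immediately
    img.src = url;
    return img;              // keep references so the objects aren't discarded early
  });
}

preloadImages(["/images/gallery-next.gif", "/images/nav-on.gif"]);
```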

He goes as far as making “the slow load” one of his seven deadly sins:

Deadly Sin Number Four: The Slow Load
Conversations among friends can survive long silent pauses, but few Web pages can afford to take long to load.
    A good rule of thumb is that most pages in a site should be under 30k, a few can be 30–50k, and perhaps one or two can weigh in at 70k. Pages larger than that should either belong to 800-pound gorillas or be put on a diet.
    If you want to force your visitors to go out to lunch while your page loads, fill it full of 8-bit dithered GIFs in the foreground, and don’t forget an enormous high-quality JPEG in the background.
    Spread out heavier loads by reusing elements cleverly: once loaded, they are cached and therefore load again almost instantly.50

Early web performance boiled down to small files, snappy servers, and praying to the Internet gods for a stable connection.

Measuring the Network

The biggest constraint on early web performance was bandwidth.

While telecommunications companies were busy building up infrastructure, the web community was looking for ways to optimize.

Most of the discussion was around networking and servers.

When I told people I was writing a book called Web Performance Tuning, the usual response I got was that the title should be “Web Server Performance Tuning.” Most people believe that the server is the only part of the Web that you can tune.51

Experts used established computing metrics for web performance.

Latency and throughput are the two most important performance metrics for Web systems… In summary, the most common measurements of Web server performance are:

  • connections per second
  • Mbits per second
  • response time
  • errors per second

Client response time includes latency at the server, plus the time spent communicating over the network, and the processing time at the client machine (e.g. formatting the response). Thus, client-perceived performance depends on the server capacity, the network load, and bandwidth, as well as on the client machine.52
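
To see how mechanical those measurements were, here is a minimal sketch, with a hypothetical log format and field names of my own, that derives the four metrics above from one second’s worth of access-log entries:

```typescript
// Derive the four classic server metrics from one second of (hypothetical) log entries.
interface LogEntry {
  bytes: number;      // response size
  responseMs: number; // server latency for this request
  ok: boolean;        // false if the request errored
}

function summarize(entriesInOneSecond: LogEntry[]) {
  const connectionsPerSecond = entriesInOneSecond.length;
  const mbitsPerSecond =
    (entriesInOneSecond.reduce((sum, e) => sum + e.bytes, 0) * 8) / 1_000_000;
  const avgResponseMs =
    entriesInOneSecond.reduce((sum, e) => sum + e.responseMs, 0) /
    Math.max(connectionsPerSecond, 1);
  const errorsPerSecond = entriesInOneSecond.filter((e) => !e.ok).length;
  return { connectionsPerSecond, mbitsPerSecond, avgResponseMs, errorsPerSecond };
}

console.log(summarize([{ bytes: 12_000, responseMs: 85, ok: true }]));
```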

In the mid ’90s, companies started cropping up to measure the Web’s response times.

In 1997, Keynote Systems published their “Business 40 Internet Performance Index”, as well as their “Top 10 Discoveries About the Internet”.

Some of their results were surprising:

Internet performance problems are generally not server problems. For many web sites, Keynote’s measurements demonstrate that most of their performance problems occur out in the Internet’s infrastructure somewhere between the web site and its users.53

That same year, the W3C published a study of their own titled “Network performance effects of HTTP/1.1, CSS1, and PNG”.

We describe our investigation of the effect of persistent connections, pipelining and link level document compression on our client and server HTTP implementations.

For all our tests, a pipelined HTTP/1.1 implementation outperformed HTTP/1.0… The savings were at least a factor of two, and sometimes as much as a factor of ten, in terms of packets transmitted. Elapsed time improvement is less dramatic, and strongly depends on your network connection.

To our surprise, style sheets promise to be the biggest possibility of major network bandwidth improvements, whether deployed with HTTP/1.0 or HTTP/1.1, by significantly reducing the need for inlined images to provide graphic elements, and the resulting network traffic. Use of style sheets whenever possible will result in the greatest observed improvements in downloading new web pages, without sacrificing sophisticated graphics design.

Who else was measuring the performance of new web technologies in the ’90s? What did they measure? What other surprises did they find?

In 1999, the W3C cited this research in a press release titled “W3C Recommendations Reduce ‘World Wide Wait’”.

Tired of having to make coffee while you wait for a home page to download?

HTTP, CSS, and PNG advances (and others, including HTML compression at the link layer and range requests) have been shown to alleviate some of the traditional Web bottlenecks. To sum up:

  • Resolving the URI. Thanks to persistent connections, HTTP/1.1 requires fewer name resolutions for the same number of HTTP requests. Also, HTTP/1.1 can correctly cache and reuse URIs that have already been resolved.
  • Connecting to the Web Server. HTTP/1.1 reduces the number of slow TCP open requests. Improved caching mechanisms in HTTP/1.1 significantly reduce the number of packets necessary to verify whether a cached resource on a proxy server must be refreshed.
  • Requesting the Web Resource. Pipelining with buffering makes more efficient use of TCP packets. Pipelining also leads to faster validation of cached information and fewer TCP packets sent. By using CSS, authors can eliminate unnecessary images and reuse style sheets with several documents. PNG means better, often smaller, images.

Performance was beginning to shift, ever so slightly, from the server to the client.

Web 2.0

The Web saw a rush of new technology in the mid-to-late ’90s: JavaScript, CSS, Flash, DHTML, and more!54

As browser implementations drifted further apart, the Web Standards Project (WaSP) formed to hold browser makers accountable for supporting W3C standards.55

Meanwhile, cash-flush startups were cropping up across the Web as the dot-com bubble began to rise.

And in 1999, Darcy DiNucci coined the term “Web 2.0” in an article titled “Fragmented Future”.

The Web, as we know it now, is a fleeting thing. Web 1.0… The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come.
    The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop.
    Ironically, the defining trait of Web 2.0 will be that it won’t have any visible characteristics at all… The Web’s outward form—the hardware and software that we use to view it—will multiply. On the front end, the Web will fragment into countless permutations with different looks, behaviors, uses, and hardware hosts.

Now the first generation of Internet appliances—Web-ready cell phones and personal digital assistants (PDAs)—has begun to appear. And while these devices are still fairly primitive, they do offer some clues to the likely future of the breed.
    For designers, the first thing to notice is the different considerations concerning form that are already appearing. The kind of Web page you can display on a cell phone or Palm Pilot is a far cry from the kind you’d create for a computer monitor. The format is not only much smaller (think 2” of screen real estate instead of 17”), but onboard storage is either minimal or nonexistent, and keyboards for alphanumeric information entry are usually missing. In fact, the hardware will be different from device to device; compare the interface of the Palm Pilot with that of the GameBoy, for instance. Do you have a 20-pixel, 200-pixel, or 2000-pixel screen width? Pen entry, joystick, or touch screen? Each device’s input and output methods will demand different interface designs.
    Besides the hardware differences, designers will have to consider an ever-widening array of connection speed capabilities. Web pages meant to be viewed on full-size monitors or TV screens will soon be able to take advantage of high bandwidth connections such as cable modems and DSL connections. Mobile appliances such as PDAs rely on much slower connections: The two-way radio planned for the Palm VII, for instance, gets about 10 Kbps. While wireless speeds will likely see gains in the future, the chasm between wired and unwired speeds will likely remain wide, and both connection models will be important.

How right she was.

Up Next: The 2000s, 2010s, and 2020s

Unfortunately, I’ve run out of time!

If you’ve found this post interesting, or found errors I should correct, please let me know. You can email me@tannerhodges.com or find me on Mastodon @tannerhodges@mastodon.online or Twitter @atannerhodges.

There’s so much more story to tell, from High Performance Web Sites to Core Web Vitals and everything in between.

I’ll post an updated version on my website, tannerhodges.com, once I finish writing up the next 22 years.


Footnotes

Prologue

  1. I’ve shamelessly copied this prologue from Brian Christian’s excellent book, The Most Human Human, page 1. It’s so good! ↩︎
  2. From Betty Shannon, Unsung Mathematical Genius and Mary Shannon Obituary. Also, a few months before she met him, Betty’s soon-to-be husband, Claude, published an article titled “A Mathematical Theory of Communication”. It turned out to be kind of a big deal… https://thebitplayer.com/ ↩︎
  3. For more stories of human computers, see Remembering the Women of the Mathematical Tables Project and NASA: When the Computer Wore a Skirt. ↩︎

Introduction

  4. Yes, we’ve officially gone interstellar. ↩︎

World War II

  5. From When Computers Were Human by David Alan Grier, pages 267–268. ↩︎
  6. From When Computers Were Human by David Alan Grier, page 261. ↩︎
  7. ENIAC’s total cost was $486,804.22. From A History of Computer Technology by Michael Williams, page 272. That’s over $8,000,000 today! https://www.bls.gov/data/inflation_calculator.htm ↩︎
  8. From “Mechanical ‘Brain’ Has Its Troubles”, New York Times, December 14, 1947, page 49. ↩︎

1950s

  9. Quoting John W. Mauchly, co-creator of the ENIAC. From The Dream Machine by M. Mitchell Waldrop, page 59. ↩︎
  10. From The Dream Machine by M. Mitchell Waldrop, page 72. See von Neumann architecture. ↩︎
  11. If only we had more time! We’d talk about the IBM 650, George Forsythe, FORTRAN, and John McCarthy—but this post is already too long. (Oh, and don’t forget the PDP-1!) Re: the “Cambrian explosion”, you could argue another decade better fits that description, but in the words of Jay W. Forrester, “More happened in percentage improvements in digital computers from 1946, when they didn’t exist, to 1956, when they came into the modern era. I might not have envisioned how much smaller and faster they’d be, but the fundamental logic hasn’t changed.” ↩︎

1960s

  12. From The Dream Machine by M. Mitchell Waldrop, page 197. ↩︎
  13. ARPA’s “Command and Control Research” was renamed “Information Processing Techniques” sometime between 1962–1964. From The Advanced Research Projects Agency, 1958–1974, page VI-12. ↩︎
  14. From The Dream Machine by M. Mitchell Waldrop, page 204. ↩︎
  15. I can’t recommend enough The Dream Machine by M. Mitchell Waldrop. It follows the life of J. C. R. Licklider, and is the single best resource I’ve found on the history of computing. For a quick sample of Lick’s ideas, see Wikipedia, Man-Computer Symbiosis, and Intergalactic Computer Network. ↩︎
  16. Quoting Bob Fano, director of Project MAC. From The Dream Machine by M. Mitchell Waldrop, page 225. ↩︎
  17. From The Dream Machine by M. Mitchell Waldrop, page 227. ↩︎
  18. It’s noted at the top of his response time paper (“International Business Machines Corporation, Poughkeepsie, New York”). Besides that, surprisingly little is written about Robert B. Miller. I’d like to know more about him. If anyone has more details, please share! ↩︎
  19. This is the earliest tool explicitly made for performance monitoring that I’ve found so far. If you know an earlier one, please share! ↩︎
  20. Alex Podelko’s excellent article “A Short History of Performance Engineering” marks IBM’s release of System Management Facilities as the beginning of performance engineering. SMF was added to OS/360 in Release 18, November 1969 (see timeline on Wikipedia and original release notes). For more performance engineering resources, check out Alex’s website. ↩︎
  21. From The Dream Machine by M. Mitchell Waldrop, Chapter 7: The Intergalactic Network. ↩︎
  22. From The Dream Machine by M. Mitchell Waldrop, page 277. ↩︎
  23. See Leonard Kleinrock’s “The Day the Infant Internet Uttered its First Words”. ↩︎

1970s

  24. The story is much more complex than that. For more details, just search for “xerox parc” or read The Dream Machine by M. Mitchell Waldrop, Chapter 8: Living in the Future. ↩︎
  25. Seriously. We’re not even going to talk about the microchip revolution because we don’t have time. Just think, in <30 years we went from supercomputers → mainframes → minicomputers → microcomputers. ↩︎
  26. “With this volume, the proceedings of the CPEUG are for the first time being made available in a form that is readily accessible not only to Federal agencies but to the general public as well.” From Computer Performance Evaluation, edited by Harold Joseph Highland, page iii. ↩︎
  27. “Indeed, the Defense Department was getting ready to make TCP/IP the standard for all its digital communications, and Cerf was beginning to plan the mammoth task of converting the Arpanet itself to TCP/IP.” From The Dream Machine by M. Mitchell Waldrop, page 401. ↩︎
  28. To hear a brief version of Dr. Connie U. Smith’s story, see CMG: Do we need Software Performance Engineering when we have the Cloud?, starting at 3:20. ↩︎
  29. For a history of Candle, see About Aubrey Chernick and Candle Corporation History. ↩︎

1980s

  30. “Arpanet itself had switched over to TCP/IP on January 1, 1983—an event that many would call the actual birth of the Internet.” From The Dream Machine by M. Mitchell Waldrop, page 433. ↩︎
  31. I’m basing this on the noticeable rise in “software” books during the 1980s in Google’s Ngram Viewer. For example, see Software Metrics by Alan Perlis. ↩︎
  32. I haven’t found a copy of the original 1981 paper yet, but according to her 1982 paper, “Software Performance Engineering” and her more recent 2021 presentation, “Software Performance Engineering Education: What Topics Should be Covered?”, Connie coined the term “Software Performance Engineering” in December 1981 in a paper titled “Increasing Productivity by Software Performance Engineering”. ↩︎
  33. It’s surprisingly difficult to find information about Dr. Connie U. Smith. The best single place to find her work is her old website: http://www.perfeng.com/classic-site/cspubs.htm. You can find more of her work, including recent publications, by searching the ACM, CMG, and IEEE libraries. ↩︎
  34. From The Dream Machine by M. Mitchell Waldrop, page 435. ↩︎
  35. From NSF’s “A Brief History of NSF and the Internet”. ↩︎
  36. Quoting Steve Wolff: “A network isn’t something you can just buy; it’s a long-term, continuing expense. And government doesn’t do that well. Government runs by fad and fashion. Sooner or later funding for NSFnet was going to dry up, just as funding for the Arpanet was drying up… Out of necessity, we forced the regionals to become general-purpose network providers.” Quoting Vint Cerf: “Brilliant… The creation of those regional nets and the requirement that they become self-funding was the key to the evolution of the current Internet.” From The Dream Machine by M. Mitchell Waldrop, pages 438–440. ↩︎
  37. “Along the way the agencies had also begun to consolidate their various networks around TCP/IP. Why should each agency maintain a specialized system for its own researchers when the NSFnet was open to everybody? … ARPA officials started systematically decommissioning their venerable IMPs and transferring their users to the faster network. By 1990 the Arpanet was history.” From The Dream Machine by M. Mitchell Waldrop, page 438. ↩︎
  38. “In 1992 Congress passed a bill that formally allowed for-profit access providers to use the NSFnet backbone. And on April 30, 1995, the NSFnet ceased to exist… The Internet was at last on its own, and self-sustaining.” From The Dream Machine by M. Mitchell Waldrop, page 440. ↩︎
  39. All of these are real, by the way. If you want to have a great time, just peruse the list of “high performance” books from the 1990s. Apparently, we hit peak performance in 1994. ↩︎

1990s

  40. There’s a whole other story simply dying to be told here. The steady rise of KPIs since 1990 is fascinating to me. Any ideas what started this trend? Please share! ↩︎
  41. From Weaving the Web, page 38. ↩︎
  42. From “The Origin of the IMG Tag” by Jay Hoffmann. ↩︎
  43. “The commercial release of Netscape Navigator 1.0 occurred on December 15, 1994.” From “Netscape CEO Barksdale’s Deposition In Microsoft Suit (Text)”. ↩︎
  44. From How the Internet Happened by Brian McCullough, page 30. Check out Brian’s podcast too! https://www.internethistorypodcast.com/ ↩︎
  45. From “The History of Internet Explorer” by Sandi Hardmeier. ↩︎
  46. From “Amazon.com announces 1996 Bestsellers”. “First true design book” quoted from its Amazon product page on December 26, 2022. According to the book’s website, it was published on July 1, 1996. ↩︎
  47. From Creating Killer Web Sites by David Siegel, page 29. ↩︎
  48. From Creating Killer Web Sites by David Siegel, page 23. ↩︎
  49. Seriously, there’s so much packed into this book it’s silly. Even tools for testing different screen sizes! But for these specific notes: The View Info Command (page 41), File Sizes on the Macintosh (page 48), Reducing File Size (page 56), Image Inflation (page 61), About Caching (page 62), Preloading Images (page 182). “Chapter 10: A Gallery” is a particularly interesting case study. It describes an early performance budget: “We decided to optimize this site for visitors with 28.8 Kbps modems and systems that display thousands of colors… Unfortunately, the total weight of all animated GIFs is just under 100K. After considering several options — including a redesign — I throw out one of the master images and use two photo layers, rather than three… Now the page is under 60K.” (pages 174–178). ↩︎
  50. From Creating Killer Web Sites by David Siegel, page 136. The seven deadly sins are: ↩︎
    • Blank Line Typography (page 85)
    • Horizontal Rules (page 116)
    • Background Images That Interfere (page 131)
    • The Slow Load (page 136)
    • Illegal Use of the Third Dimension (page 170)
    • Aliasing, Dithering, and Halos (page 227)
    • Paralysis (page 250)
  51. “When you’re desperate to improve performance, however, you become much more creative about finding other parts of the Web that you can tune.” From Web Performance Tuning by Patrick Killelea, page ix. If only we had time, there’s so much to unpack in this book! ↩︎
  52. From Capacity Planning for Web Performance by Daniel A. Menascé and Virgilio A. F. Almeida, page 82. ↩︎
  53. From Keynote’s “Top 10 Discoveries About the Internet”. See also, “How Fast Is the Internet?” and “Keynote Systems: Products – Software”. ↩︎
  54. JavaScript was released as part of Navigator 2.0 in September 1995. CSS was first implemented by Internet Explorer 3 in August 1996. Flash came out in 1996. DHTML became popular in 1997. You could argue AJAX came out in 1999 too. ↩︎
  55. For more details on WaSP, see “A Short History of WaSP and Why Web Standards Matter”, “1998: Open Season with Mozilla, W3C’s DOM, and WaSP”, and Wikipedia: Web Standards Project. ↩︎