Page-Load-Time and Page Utility

In the Web business world, it’s well understood that page-load metrics (time-to-paint, time-to-responsiveness, etc.) drive engagement. Multiple studies (internal and external) show that even slight increases in load time can drive away a large percentage of visitors, contributing to users exiting a page or simply failing to check out after filling their cart at a web store. As a result, it should come as no surprise that these metrics are studied to death and obsessed over by COOs trying to keep their business in the green.

Truth be told, I always found these metrics somewhat unbelievable, and had never personally experienced the effect… until I spent time in Papua, doing my Internet browsing behind a 3 Mbps satellite link. Ever since, I’ve wanted to write an article summarizing my main takeaways and musings on what this means for the Internet back home.

Fortunately, Slack was relatively easy to replace. Once I realized that it was completely DOA, I shot off an email to my active collaborators informing them that we’d be email-only during my time in the field. Personally, I strongly prefer email to Slack for most communication, and greatly enjoyed this shift. Unfortunately, my attempts to continue this trend since returning from the field have been unsuccessful.

Redirecting mobile users to “m. subdomains” (e.g. m.facebook.com) serving a more mobile-friendly webpage has been a common approach for over a decade. While most of these changes focus on page layout (vertical orientation, touchscreen optimization, etc.), the pages are also commonly designed to consume less bandwidth and load faster in low-bandwidth conditions. To that point, it’s a well-known trick among field researchers to manually point your computer’s browser at the mobile version of a page when the regular one is struggling to load.
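For the curious, the server-side half of this pattern is simple enough to sketch in a few lines. Here’s a minimal, hypothetical version as a Flask app; the user-agent hints and the m.example.com target are placeholders, and real sites use far more robust device detection:

```python
# Minimal sketch of the classic "m." redirect (hypothetical, not any real site's logic).
from flask import Flask, redirect, request

app = Flask(__name__)

# Crude user-agent heuristic, for illustration only.
MOBILE_HINTS = ("iPhone", "Android", "Mobile")

@app.route("/")
def index():
    user_agent = request.headers.get("User-Agent", "")
    if any(hint in user_agent for hint in MOBILE_HINTS):
        # Send mobile clients to the lighter, touch-friendly subdomain.
        return redirect("https://m.example.com/", code=302)
    return "full desktop page"

if __name__ == "__main__":
    app.run()
```

The field-researcher trick above is just this redirect run in reverse: type the m. URL yourself and skip straight to the lightweight page.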

I’d be remiss not to give credit where it’s due. The big Internet companies, Google and Facebook in particular, are known for putting a remarkable amount of engineering effort into ensuring their pages load as fast as possible in low-bandwidth conditions, and it shows. The difference between loading a Google search result, AMP page, or cached page and loading the “real thing” from the origin was incredibly noticeable, even after accounting for factors such as CDNs.

Though these companies put a ton of effort into loading content quickly, interactive work and “offline mode” webapps still struggled. Using Google Docs to collaborate with co-authors back home, I found the experience flaky and unreliable enough that I simply copy-pasted the document into a text editor on my computer in the mid-morning, emailed my colleagues to let them know I held the “lock,” worked throughout the day, and pasted it back in the evening. While slightly cumbersome, the 9-hour time difference to Seattle made this process minimally disruptive: I would simply wait for my co-authors to go to sleep before starting work.

Finally, the biggest surprise of all was YouTube. I fully expected YouTube to be borderline unusable out in the field, but was incredibly surprised, and impressed, by how well it adapted to our conditions. The video quality wasn’t exactly high-def, but YouTube was remarkably consistent in its streaming performance, downgraded video quality as appropriate, and largely avoided long stretches of buffering.

Casual studies have found a correlation between how new an Internet protocol is and how much bandwidth it uses. The pattern even extends back into the analog world before digital communication, running backward from the telephone system to AM, FM, and Morse code. Intuitively, this makes loads of sense: as the pipes got wider and faster, applications used more and more bandwidth to do more and more interesting things. Interestingly, the rule also implies its converse: under constrained conditions, the older the protocol, the better it works!

Comically simple as it is, I found this rule of thumb excellent for predicting the success of a particular application, or for finding a quick fix to relieve congestion. Switching from webmail to SMTP, from responsively designed sites to plaintext HTML, or from streaming sites to plain downloads all greatly improved our user experience.
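To put rough numbers on the plaintext-HTML swap, here’s a quick sketch that compares the HTML weight of a full responsive page against a text-only edition. It uses the third-party requests library, and the CNN pair is just one example; the exact figures will vary by site and day:

```python
# Sketch: compare the HTML weight of a responsive page vs. a plaintext edition.
# Uses the third-party `requests` library; the URL pair is illustrative.
import requests

PAIRS = [("https://www.cnn.com/", "https://lite.cnn.com/")]

for full_url, lite_url in PAIRS:
    full_kib = len(requests.get(full_url, timeout=30).content) / 1024
    lite_kib = len(requests.get(lite_url, timeout=30).content) / 1024
    print(f"{full_url}: {full_kib:.0f} KiB vs {lite_url}: {lite_kib:.0f} KiB")
```

Note that this only counts the initial HTML; the full page goes on to pull in images, scripts, and trackers, so the real gap on a 3 Mbps link is even wider.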

After a week or so of working in Papua, I started to notice another interesting trend: I barely spent any time on my usual “time-wasting” sites (e.g. Reddit or Facebook), but the rest of my Internet usage (e.g. researching for various articles) was largely unchanged. Paying closer attention, I traced the root cause of this difference to that famous page-load-time metric, and its role in interrupting habit-based behavior.

As it turns out, almost all of my dumb Internet surfing happens as an unconscious habit (embarrassing, I know). Get some work done, get a little bored, and all of a sudden I’ve tabbed over to a news site without even thinking about it. For this type of unconscious behavior, introducing a 5-second delay was pretty much kryptonite. I’d open a new tab, start loading a page, realize I was staring at a blank screen while it loaded, ask myself what I was doing, and close the tab, all before it ever displayed anything.

Even more interestingly, I found that I didn’t have this problem at all with other, more intentional uses of the Internet. Search Google for some information (e.g. “smartphone penetration 2020”), wait a bit for the page to load, click on the right link, wait for it to load: no problem at all. The “what am I doing?” question had a clear and obvious answer, and I didn’t actually mind waiting for useful and worthwhile information. I didn’t measure this, but over the course of an average day I probably spent far less time waiting for pages to load than I normally waste on those same stupid websites. Turns out, for me at least, the question “is this content worth waiting 5 seconds for?” is an excellent filter for separating quality content from junk.

Obviously, this result is extremely anecdotal and preliminary, but it raises a ton of questions I want to explore further. First and foremost, I’d like to write a browser plugin that artificially induces these delays, and see if the result still holds back here in bandwidth-rich Seattle, for me and/or other users.
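Until that plugin exists, a local proxy approximates the experiment well enough for self-testing. Here’s a minimal sketch as a mitmproxy addon; the host list and 5-second delay are my own hypothetical choices, not a recommendation:

```python
# delay_addon.py: sketch of an artificially induced page-load delay (mitmproxy addon).
# Run with: mitmdump -s delay_addon.py, then point the browser at the proxy.
import time

from mitmproxy import http
from mitmproxy.script import concurrent

SLOW_HOSTS = {"www.reddit.com", "old.reddit.com", "www.facebook.com"}  # hypothetical
DELAY_SECONDS = 5.0  # long enough to trigger the "what am I doing?" reflex

@concurrent  # run in a worker thread so the sleep doesn't stall other traffic
def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host in SLOW_HOSTS:
        time.sleep(DELAY_SECONDS)
```

One nice property of delaying at the proxy rather than in the page: the blank-screen pause lands before any content renders, which is exactly the moment the habit chain seems to break.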

If these results hold, they have fascinating implications for that fabled page-load-time metric. Page load time is generally considered immensely impactful on other metrics such as user engagement and conversion, and individual case studies often present dramatic results. However, to my knowledge no one has studied how elastic the impact of page load time is across different sites. Qualitatively, there are some obviously intuitive results (such as the time I panic-googled “are there bears in Scotland” while camping with 1X connectivity), but we could run such an experiment quantitatively, using the elasticity of page-load-time to gauge how “useless” a site is.
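To make “elasticity” concrete: in the usual economics sense, it’s the percent change in some engagement metric per percent change in load time. A hedged sketch, with entirely invented numbers, of how such an experiment could rank sites:

```python
# Sketch: load-time elasticity of engagement. All numbers below are invented.
def elasticity(engagement_before: float, engagement_after: float,
               load_before_s: float, load_after_s: float) -> float:
    """(% change in engagement) / (% change in load time)."""
    pct_engagement = (engagement_after - engagement_before) / engagement_before
    pct_load = (load_after_s - load_before_s) / load_before_s
    return pct_engagement / pct_load

# Hypothetical A/B data: pageviews per session, before and after a 1s -> 6s delay.
sites = {
    "habit-news-site": elasticity(12.0, 4.0, 1.0, 6.0),  # collapses under delay
    "reference-site": elasticity(3.0, 2.8, 1.0, 6.0),    # barely budges
}
for name, value in sites.items():
    print(f"{name}: elasticity {value:+.2f}")
```

Under this framing, the more sharply a site’s engagement collapses as load time grows (the more negative the elasticity), the more “useless” its content presumably is.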

In addition to measuring a site’s “uselessness factor,” there are some interesting action items to discuss here as well. On the surface, artificially inducing latency sounds like a stupid way to break my Internet time-wasting habits, but it makes perfect sense when viewed through the lens of habit and addiction psychology. In that context, identifying and interrupting “habit chains” is known to be an incredibly powerful tool for creating sustainable behavioral change, and artificially introducing a pause for reflection is an effective way to do exactly that.

Going further, the strongest argument for applying behavioral psychology to your own Internet usage is that the big companies already do this! It’s no secret in Silicon Valley that the major players literally hire addiction experts to optimize their websites for habit-forming behavior. I wouldn’t be surprised at all if my personal hunches here have already been heavily studied and locked up in some internal white paper: “users click on 50 more videos so long as we keep the interaction time too short for them to question what they’re doing.” This approach makes solid business sense, and is also pretty in line with how casinos operate, making a point of keeping people comfortable and unaware of the passage of time as they play.

Finally, for those of us who work in traditional indigenous or tribal communities, there’s almost a moral imperative to study this. Many people in disconnected areas express fears and concerns about introducing the Internet into their community, and often these fears center on (a) disrupting community traditions and norms and (b) their kids ending up stuck on their phones all day. This very article could cast a particularly dark light on someone like me, working to bring the Internet to rural Papua while simultaneously working to limit my own exposure. A good dealer never gets high on his own supply, right?

Obviously, life exists in shades of grey, but my point is that these concerns are well founded. The Internet is increasingly addictive and surveillance-based, and I’ve noticed certain businesses taking an increasingly adversarial stance towards their users, especially when the users are the product rather than the customer. It’s tempting to take a hands-off approach to valuing one website over another (a key principle of Net Neutrality) and simply say “let the users decide what’s best,” but I’d certainly like to see more discussion about healthy Internet usage: both ways to quantify the problem and tools to combat it as it arises.
