The Internet: It’ll get slower before it gets faster

For the three weeks I was recently in the UK, I used a UMTS modem (i.e. the same technology as a 3G phone) to surf the web and do all my work. When I went round to the house of my friend Robin, who also works in IT, I found that he does all his surfing through a cable from his phone to his computer: i.e. also UMTS.

At least in the UK, this is extremely popular. It makes a lot of sense in Asia too: they have excellent high-speed mobile phone networks there, and as for all one’s preconceptions about Asia having the latest handset devices, I can confirm first-hand that they’re all true.

As we all know and have been experiencing since about 2000, phones are going to keep getting more powerful and gaining larger screens. Full browsers will (and do) run on them. They will also be UMTS devices.

And as for those people who don’t surf via UMTS: nearly everyone I know surfs at home using WLAN. A lot of offices use WLAN too. And all the surfing at airports, coffee houses, hotels, conferences etc. obviously goes on via WLAN.

UMTS and WLAN have high bandwidth, but they have extremely high latency compared with a cable connection. That means that although the bytes flow fast once they’ve started, it takes a long time for the first byte to arrive.

I am quite proud of the fact that when I designed the “Uboot Joe” software (Windows software which ran on the user’s PC, sat in the notification area by the clock, and communicated with Uboot) I took this into account. Every action you do with the Joe costs at most one client-server round trip. For example, to view all the thumbnails in a folder, there is a single request from the Joe to the server like “get all data in folder_id”, and the return structure contains a) information about the folder, b) information on all the photos within the folder, and c) all the binary JPEG data for the thumbnails of those images. Try using the Uboot Joe on a UMTS link: it works faster than any website.
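
A minimal sketch of what such a single-round-trip response might look like; the type and field names (FolderResponse, PhotoInfo and so on) are hypothetical, since the actual Uboot protocol isn’t published:

    import java.util.List;

    // Hypothetical response type: one request returns everything the
    // client needs to render the folder, thumbnails included.
    class PhotoInfo {
        long photoId;
        String title;
        byte[] thumbnailJpeg;    // (c) the thumbnail bytes travel inline
    }

    class FolderResponse {
        String folderName;       // (a) information about the folder
        List<PhotoInfo> photos;  // (b) per-photo metadata, with thumbnails
    }

The client renders entirely from that one response; it never needs to go back to the server for the thumbnails individually.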

Contrast this design with HTML. The first response from the server contains <img src=xx> tags, and only once that has been received can the browser make the further requests necessary to retrieve the images. If the first bytes of every response take a long time to arrive, then the user experiences that “long time” twice before they see the data they requested: first to get the HTML page, then again to get the images.

In fact it’s worse. If a page has 50 embedded images, the browser doesn’t open up 50 concurrent connections to the server (for good reason). Instead it opens e.g. 4 connections, which means that e.g. image number 5 has to wait for the “long time” of fetching image number 1 to complete. (Some sites try to get around this by having lots of servers with different names, e.g. img341.domain.com, and distributing the images over these servers.)
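
A back-of-envelope model of that effect; this is an illustration only, assuming the 4-connection limit and the ~300ms latency figure used in this post, and ignoring bandwidth, keep-alive and pipelining:

    // Rough model: N images over C parallel connections arrive in
    // ceil(N / C) serial "waves", each wave paying the full latency.
    public class LatencyEstimate {
        public static void main(String[] args) {
            int images = 50;
            int connections = 4;
            double rttSeconds = 0.3;  // latency per request, not bandwidth
            int waves = (images + connections - 1) / connections;
            System.out.printf("~%d waves x %.1fs = ~%.1fs of pure waiting%n",
                    waves, rttSeconds, waves * rttSeconds);
        }
    }

With those figures, the last images only start arriving after roughly 13 waves, i.e. almost four seconds of pure latency.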

And it’s even worse than that. Even if the application only does one round trip to the server, the underlying protocols might do more round trips: for example, first to contact the DNS server to get the IP address for the domain name used in the URL, and only then to request the data from the server.
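
The DNS round trip is easy to observe directly. A small sketch that times a lookup (the first run pays the DNS cost; later runs usually hit the operating system’s or JVM’s resolver cache):

    import java.net.InetAddress;

    // Time a DNS resolution: a round trip that must complete before
    // the HTTP request proper can even be sent.
    public class DnsCost {
        public static void main(String[] args) throws Exception {
            long start = System.nanoTime();
            InetAddress addr = InetAddress.getByName("example.com");
            long millis = (System.nanoTime() - start) / 1_000_000;
            System.out.println(addr.getHostAddress()
                    + " resolved in " + millis + "ms");
        }
    }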

In addition to this being a problem with UMTS and WLAN, one also has to take into account that the Internet is global. When I’m in Macau accessing European servers I get a round trip of about 300ms. So if one adds three “long times” to an otherwise extremely fast request—easily done—one has added a whole second on to the time the user has to wait. And Jakob Nielsen says that after 1 second in total, users start to lose focus on what they’re doing.

So to design applications in this age, one needs to be aware of the number of serial server round trips, i.e. the number of times you must ask the server for something and wait for it to be delivered before you can ask the server for the next thing.

For example:

  1. An HTML page which references an external CSS file, and this CSS file in turn contains URLs to images: three serial round trips before those images appear.
  2. Pages with many images. The browser only requests a few files from the same server at once, so again the response to image number 1 must be finished before the request for image number 3 can begin.
  3. Javascript software which does multiple serial calls to the server, e.g. “get session token for username/password” then “get info to display on page for session token” (see the sketch after this list for how such a pair can be collapsed into a single call).
  4. A form which submits data to a piece of software. The software does something, but instead of returning a result page it returns a redirection command to a “real” result page. This is often done so that one can safely hit “refresh” on the result page, or to make the result page’s URL look nicer.
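
As promised above, a minimal sketch of collapsing example 3’s two serial calls into one; the names (LoginAndPageData, loginAndFetch) are hypothetical:

    // Hypothetical combined call: the server authenticates *and* returns
    // the first page's data in a single response, so the client pays one
    // round trip instead of two.
    class LoginAndPageData {
        String sessionToken;  // what the first call used to return
        String pageData;      // what the second call used to return
    }

    // Client side, a single round trip:
    //   LoginAndPageData r = server.loginAndFetch(username, password);
    //   render(r.pageData);  // keep r.sessionToken for later requests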

GWT is excellent in this regard. It can download lots of small icon-sized images in one request (it combines them into one big image on the server and chops them up again on the client), and it makes you explicitly aware of the number of server round trips by forcing you to define interfaces for client-server interactions – as opposed to some automated scheme where you write code and the framework decides when to insert client-server round trips. (Wicket makes client-server round trips easy with AjaxLink; my fear is that it might be too easy, that one might do them too often, and lose the overview of how many are happening.)
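
For illustration, this is roughly what that explicitness looks like with GWT’s RPC mechanism; the service name and the FolderData payload are hypothetical, but the interface pair is GWT’s standard pattern:

    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Placeholder payload type (would carry folder and photo metadata).
    class FolderData implements java.io.Serializable {}

    // Every client-server interaction is a method you declared yourself,
    // so each round trip is a visible, deliberate line of code.
    @RemoteServiceRelativePath("folders")
    interface FolderService extends RemoteService {
        FolderData getFolder(long folderId);  // one method = one round trip
    }

    // GWT requires a matching asynchronous interface for the client side.
    interface FolderServiceAsync {
        void getFolder(long folderId, AsyncCallback<FolderData> callback);
    }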

Pre-caching is a good idea too. E.g. if you are writing a photo-viewer application, with a photo shown full screen and a “next” button, it makes sense to load the image on the “next” page even before the user has clicked on it. That download won’t interfere with the rest of what the user is doing, as the bandwidth is not the bottleneck, just the time between starting the download and the bytes starting to arrive at the client. (Although one can’t download too much without the user noticing, as some people pay per MB!)
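
A minimal pre-caching sketch along those lines; the URL scheme and method names are made up for the example. While the user looks at photo N, photo N+1 is fetched in the background so the latency is paid before the click:

    import java.io.InputStream;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Prefetcher {
        private static final ExecutorService pool =
                Executors.newSingleThreadExecutor();

        // Best-effort: download the bytes in the background. A real
        // application would keep them in a cache keyed by URL rather
        // than discarding them as this sketch does.
        static void prefetch(String url) {
            pool.submit(() -> {
                try (InputStream in = new URL(url).openStream()) {
                    in.readAllBytes();
                } catch (Exception e) {
                    // a failed prefetch just means a normal load later
                }
            });
        }

        static void showPhoto(int id) { /* render the current photo */ }

        public static void main(String[] args) {
            showPhoto(41);                                  // current photo
            prefetch("https://example.com/photos/42.jpg");  // speculative
            pool.shutdown();  // let the prefetch finish, then exit
        }
    }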

But the most important point, I think, is this: these days, one must test one’s web applications on a high-latency connection. Historically I have tended to develop locally (everything installed on my laptop), or in an office with a network cable, high-speed Internet and a link to the data center where the test server sits, with the office in the same country as the server. Maybe this sounds strange, but I think one should develop web applications while using a UMTS card.

2 Responses to “The Internet: It’ll get slower before it gets faster”

  1. Robin Salih Says:

    Instead of physically using a UMTS card whilst developing your networked apps, you could perhaps use software to mimic the high latency?

  2. adrian Says:

    A great resource about latency and similar issues in websites: “High Performance Web Sites” by Steve Souders, ISBN 0596529309
