If you're curious about how long your page takes to load and wonder whether anything besides human observation can tell you what's going on, this excerpt from Even Faster Web Sites will point you in the right direction.
The easiest, most straightforward, and probably least precise way to measure latency is human observation: simply use the application on your target platforms and confirm that performance is adequate. Since the point of human interface performance is to please humans, this is actually a fine way to perform such measurements. Obviously, few humans can reliably quantify delays in precise whole or fractional seconds, but falling back to coarser categorizations such as “snappy,” “sluggish,” and “adequate” does the job.
Manual code instrumentation is really straightforward. Let’s say you have an event handler registered on your page, as in:
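The book's original listing is not reproduced in this excerpt; the following is a minimal sketch of such a handler, where the function name, the element id, and the work it performs are all illustrative assumptions:

```javascript
// Hypothetical event handler; the name, the button id, and the loop
// are illustrative assumptions, not the book's original listing.
function handleClick() {
  var total = 0;
  for (var i = 0; i < 1000000; i++) { // stand-in for real work
    total += i;
  }
  return total;
}

// Registering the handler (in a browser):
// document.getElementById("send").onclick = handleClick;
```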
A simple way to add manual instrumentation would be to locate the function and add timing to it:
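Continuing the hypothetical handler sketched above, wrapping its body with Date-based timestamps yields something like:

```javascript
// Timed version of the hypothetical handler; the workload is still an
// illustrative assumption.
function handleClick() {
  var start = new Date();           // timestamp before the work
  var total = 0;
  for (var i = 0; i < 1000000; i++) {
    total += i;
  }
  var elapsed = new Date() - start; // elapsed time in milliseconds
  alert("handleClick took " + elapsed + " ms");
  return total;
}
```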
The preceding code will produce a pop-up dialog that displays the execution time; one millisecond represents 1/1,000 of a second, so 100 milliseconds represent the 0.1-second “snappiness” threshold mentioned earlier.
Many browsers offer a built-in instance named console that provides a log() function (Firefox makes this available with the popular Firebug plug-in); we greatly prefer this kind of logging to pop-up dialogs, which interrupt the user every time the measurement runs.
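Using that console object, the timing from the earlier sketch can be recorded without interrupting the user (the handler and its workload remain illustrative assumptions):

```javascript
// Same hypothetical handler, logging the timing instead of showing a dialog.
function handleClick() {
  var start = new Date();
  var total = 0;
  for (var i = 0; i < 1000000; i++) {
    total += i;
  }
  console.log("handleClick took " + (new Date() - start) + " ms");
  return total;
}
```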
There are tools to perform an automated measurement of code execution time, but such tools are typically used for a different purpose. Instead of being used to determine the precise execution duration of a function, such tools—called profilers—are usually used to determine the relative amount of time spent executing a set of functions; that is, they are used to find the bottleneck or slowest-running chunks of code.
While you might think the time-related columns in a profiler's report represent a precise measurement of function execution time, it turns out that profilers are subject to something like the observer effect in physics: the act of observing the performance of code changes the performance of that code.
Profilers can take two basic strategies representing a basic trade-off: they can instrument the code being measured by inserting special code that collects performance statistics (essentially automating the creation of timing code like that in the previous listing), or they can passively sample the runtime, periodically checking which piece of code is executing at that moment. Of these two approaches, sampling does less to distort the performance of the code being profiled, but at the cost of lower-quality data.
Firebug subjects results to a further distortion because its profiler executes inside Firefox's own process, so it can steal processing time from the very code it is measuring.
Nevertheless, the “Percent” column of Firebug’s output demonstrates the power of measuring relative execution time: you can perform a high-level task in your page’s interface (e.g., click the Send button) and then check Firebug’s profiler to see which functions spent the most time executing, and focus your optimization efforts on those.
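Firebug (like most modern browser developer tools) also exposes this workflow programmatically through console.profile() and console.profileEnd(); the function names below, and the work being profiled, are illustrative assumptions:

```javascript
// Hypothetical Send-button handler; onSendClick and sendMessage are
// illustrative assumptions, not part of the book's text.
function onSendClick() {
  console.profile("send");    // begin collecting a profile
  var n = sendMessage();
  console.profileEnd("send"); // stop profiling and display the per-function report
  return n;
}

function sendMessage() {
  // Stand-in for the real work whose hot spots we want to find.
  var total = 0;
  for (var i = 0; i < 100000; i++) {
    total += Math.sqrt(i);
  }
  return total;
}
```

The resulting report serves the same purpose as clicking Profile in Firebug's toolbar: it shows the relative time spent in each function during the profiled span.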
The lesson is simple: don’t introduce potentially long-running, poorly performing code into your web page.
Learn more about this topic from Even Faster Web Sites.
This book contains six guest chapters contributed by Dion Almaer, Doug Crockford, Ben Galbraith, Tony Gentilcore, Dylan Schiemann, Stoyan Stefanov, Nicole Sullivan, and Nicholas C. Zakas.