Benchmark - JavaScript benchmarking

Version

use the source, young padawan!

Synopsis

// "Basic" usage
new Benchmark(
  { // functions to be benchmarked
    foo: function(){ return 1+1 },
    bar: function(){ return 3-1 }
  }
);

// "Advanced" usage
new Benchmark(
  { // functions to be benchmarked
    upper: function(){ return this.TEST.toUpperCase() },
    LOWER: function(){ return this.TEST.toLowerCase() }
  },
  { // options
    iterations:  -2, // run each function for at least 2 seconds
    nATime: {
      upper: 1000, // run "upper" in chunks of 1000
      LOWER: 0     // automatically determine chunk value for "LOWER"
    },
    cooldown:    1000, // 1 second delay before resuming testing
    runCaps:     4000, // maximum 4 seconds for a continuous run
    responders:  'Perlish', // short-name of the 'responders' suite
    atStart: function() {
      // populate 'TEST' with the tested function's name
      this.TEST = this.current;
    }
  }
);

Examples

String#split
a very basic example, yet quite useful as a template for creating new benchmarks
Element#setStyle
good for checking out how to deal with the benchmarked functions' parasitic existence (and for seeing Benchmark handlers in use)
String#escapeHTML
a demonstration of how parasitic Benchmark really is (see how "TEST" is applied?)

Need more eye candy? See Benchmarker examples.

Description

This module started as a JavaScript port of Perl's Benchmark.pm. Its default output tries to be similar to its Perl cousin's, but its internals evolved into a somewhat different animal, better adapted to living in a web browser environment.

Interface

  new Benchmark(functions, options);

Upon instantiation, a Benchmark object automatically starts benchmarking each provided function, triggering events along the way.

Benchmark objects can be reused via the run method, as in the following example:

  // instantiate *and* run benchmark
  var bm = new Benchmark({
    foo: function(){...},
    bar: function(){...}
  });
  // re-run benchmark for all methods ("foo" and "bar")
  bm.run();
  // re-run benchmark for method "foo" only
  bm.run('foo');

This object operates in liberal mode, meaning that ALL options passed to the constructor will parasite it, no questions asked. Moreover, all handlers (including "responders") are executed in the current Benchmark object's context, so using this inside such a handler is appropriate.
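For instance, here is a minimal sketch of this behaviour (the "stash" option name is made up for illustration; "atStart" and "current" come from the interface described below):

  // "stash" is an arbitrary extra option: liberal mode copies it onto
  // the Benchmark object, so handlers can reach it through "this"
  new Benchmark(
    { foo: function(){ return 1+1 } },
    {
      stash: 'demo run',
      atStart: function() {
        // "this" is the Benchmark object itself
        window.status = this.stash + ': benchmarking "' + this.current + '"';
      }
    }
  );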

The functions to be benchmarked are provided as the first argument. For example:

  new Benchmark({
    name_1: function() { /* function 1 code */},
    name_2: function() { /* function 2 code */}
  });

No option is compulsory; sensible defaults are already in place. The currently recognized options are listed below (a combined sketch follows the list):

iterations {Number} [-0.2]
if positive: The number of iterations to run each method
if negative: The minimum number of seconds to run each method
if null: Use the default (which is -0.2)
nATime {Integer | Object} [0]
The number of times a function is executed in a row (tight-loop). A better name could be “chunkSize” ;-)
If 0 (default and recommended), Benchmark will do its best to automatically determine a suitable value for each function under test. Leaving it at 0 also makes the spikes control behave better.
If an Object, Benchmark treats it as a "Hash" holding a particular nATime value for each named function.
spikes {Number} [5]
The maximum allowed ratio between the "worst" and "best" timings within a chunk, deciding whether the current chunk's results are to be ignored:

  if (worst > spikes * best) ignore()

A value less than or equal to 1 disables this error control.

Please note that a low enough value may keep the benchmark running for a very long time before it manages to collect only “good” results (lowering nATime could help).

cooldown {Integer} [200]
The number of milliseconds to pause before resuming testing.
runCaps {Integer} [3000]
The maximum number of milliseconds a test sequence may run before a delay is triggered (used to work around current browsers' script running caps, e.g. Gecko's dom.max_script_run_time or Internet Explorer's script time-out).
responders {String} ['Perlish']
The short name of the Benchmark.Responders "class" to use.
atInit {Function} [function(){}]
This function will be called after Benchmark object initialization.
atStart {Function} [function(){}]
This function will be called before starting to benchmark a function.
beforeTest {Function} [function(){}]
This function will be called before each test iteration.
afterTest {Function} [function(){}]
This function will be called after each test iteration.
atFinish {Function} [function(){}]
This function will be called after finishing benchmarking a function.
atEnd {Function} [function(){}]
This function will be called after all benchmarking is complete.
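
As a combined sketch (the tested function, the chosen values, and the "counter" property are illustrative only, not recommendations):

  new Benchmark(
    { join: function(){ return ['a', 'b', 'c'].join('-') } },
    {
      iterations: 1000, // positive: run exactly 1000 iterations
      nATime:     100,  // execute the function in chunks of 100 calls
      spikes:     3,    // ignore chunks where worst > 3 * best
      cooldown:   500,  // pause half a second before resuming
      beforeTest: function() { this.counter = (this.counter || 0) + 1 },
      atEnd:      function() { window.status = 'ran ' + this.counter + ' tests' }
    }
  );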

Benchmark.Responders

After each step, Benchmark tries to dispatch an event handler. These event handlers are:

onInit
called after object's initialization;
onStart
called before starting to benchmark a function;
onIterate
called after finishing a benchmark iteration;
onPause
called when interrupting the benchmarking process;
onResume
called when resuming the benchmarking process;
onFinish
called after finishing benchmarking a function;
onComplete
called after benchmarking for all functions is complete.

Each such handler is called in the context of the current Benchmark object, possibly receiving a numeric argument describing the current position inside the data structure.

Each responders suite should reside in a Benchmark.Responders.<SuiteName> object, with all its methods applied to the Benchmark object upon initialization.
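
A minimal custom suite might look like the following sketch (the suite name "Quiet" and the handler bodies are made up for illustration; see the bundled 'Perlish' suite in the source for the real thing):

  // a hypothetical suite; select it with { responders: 'Quiet' }
  Benchmark.Responders.Quiet = {
    onStart: function() {
      // "this" is the current Benchmark object
      window.status = 'benchmarking "' + this.current + '"...';
    },
    onIterate: function(position) {
      // "position" is the numeric argument mentioned above
      window.status = 'iteration ' + position + ' of "' + this.current + '"';
    },
    onComplete: function() {
      window.status = 'benchmarking complete';
    }
  };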

Ready-made Responders Suites

Hacking

May I just say: Use the source, Luke? ;-)

OK, here are a few hints:

Dependencies

None.

Incompatibilities

None reported.

Bugs and Limitations

No bugs have been reported.

Obviously, it cannot benchmark functions for which nATime consecutive executions take longer than the browser's script running cap. For example, a function taking 20 ms per call with nATime set to 1000 would need a 20-second uninterrupted run, well past Gecko's default 10-second dom.max_script_run_time.

Note that erratic results may occur, given the nondeterministic nature of the underlying testbed (a browser, a computer, etc.). I'm still researching this; a more thorough error control system would be a very useful addition to this library.

To Do

See Also

Obviously, you may find many JavaScript benchmarking tools around the web. Yet I could not find any that implement it the way Perl does.

Credits

Authors

Marius Feraru, <altblue@n0i.net>.

License and Copyright

© 2006-2007 Marius Feraru <altblue@n0i.net>. All rights reserved.

This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

Disclaimer of Warranty

Because this software is licensed free of charge, there is no warranty for the software, to the extent permitted by applicable law. Except when otherwise stated in writing the copyright holders and/or other parties provide the software "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the quality and performance of the software is with you. Should the software prove defective, you assume the cost of all necessary servicing, repair, or correction.

In no event unless required by applicable law or agreed to in writing will any copyright holder, or any other party who may modify and/or redistribute the software as permitted by the above license, be liable to you for damages, including any general, special, incidental, or consequential damages arising out of the use or inability to use the software (including but not limited to loss of data or data being rendered inaccurate or losses sustained by you or third parties or a failure of the software to operate with any other software), even if such holder or other party has been advised of the possibility of such damages.