SYNOPSIS

     use Benchmark::Command;
    
     Benchmark::Command::run(100, {
         perl        => [qw/perl -e1/],
         "bash+true" => [qw/bash -c true/],
         ruby        => [qw/ruby -e1/],
         python      => [qw/python -c1/],
         nodejs      => [qw/nodejs -e 1/],
     });

    Sample output:

                          Rate      nodejs      python        ruby bash+true   perl
     nodejs    40.761+-0.063/s          --      -55.3%      -57.1%    -84.8% -91.7%
     python        91.1+-1.3/s 123.6+-3.3%          --       -4.0%    -66.0% -81.5%
     ruby         94.92+-0.7/s 132.9+-1.8%   4.2+-1.7%          --    -64.6% -80.8%
     bash+true   267.94+-0.7/s   557.3+-2%   194+-4.4% 182.3+-2.2%        -- -45.7%
     perl         493.8+-5.1/s   1112+-13% 441.9+-9.7% 420.3+-6.6%  84.3+-2%     --
    
     Average times:
       perl     :     2.0251ms
       bash+true:     3.7322ms
       ruby     :    10.5352ms
       python   :    10.9769ms
       nodejs   :    24.5333ms

DESCRIPTION

    This module provides run(), a convenience routine to benchmark external
    commands. It is similar to Benchmark::Apps except that: 1) commands are
    executed without a shell (using the system {$_[0]} @_ syntax); 2)
    Benchmark::Dumb is used as the backend. This makes the module suitable
    for benchmarking commands that complete in a short time, like those in
    the above example.
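    The shell-less execution mentioned above can be sketched as follows.
    This is an illustrative snippet, not the module's actual internals: the
    indirect-object form of system() runs the named program directly, so
    no /bin/sh is involved and shell metacharacters are passed literally.

```perl
# Run "perl -e1" without involving a shell. The block before the
# argument list names the program to execute; the list becomes its argv.
my @cmd = ("perl", "-e1");
system { $cmd[0] } @cmd;    # no shell; metacharacters are not interpreted
print "exit status: ", $? >> 8, "\n";
```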

FUNCTIONS

 run($count, \%cmds)

    Does some checks, then converts %cmds, a hash mapping names to command
    arrayrefs (e.g. {perl=>["perl", "-e1"], nodejs=>["nodejs", "-e", 1]}),
    into %subs, a hash mapping names to coderefs (e.g. {perl=>sub {system
    {"perl"} "perl", "-e1"}, nodejs=>sub {system {"nodejs"} "nodejs",
    "-e", 1}}).

    The checks done are: each command must be an arrayref (so that it can
    be executed without invoking a shell), and the program (the first
    element of each arrayref) must exist.
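    The conversion described above can be sketched like this. The variable
    names and the single-key %cmds are illustrative assumptions, not the
    module's actual code:

```perl
# Hypothetical sketch: turn a hash of command arrayrefs into a hash of
# coderefs that execute each command without a shell.
my %cmds = (
    perl => ["perl", "-e1"],
);

my %subs;
for my $name (keys %cmds) {
    my $cmd = $cmds{$name};
    # check: each command must be an arrayref
    die "$name: command must be an arrayref\n" unless ref $cmd eq 'ARRAY';
    my @argv = @$cmd;
    # each coderef runs its program directly via the indirect-object form
    $subs{$name} = sub { system { $argv[0] } @argv };
}

$subs{perl}->();    # runs: perl -e1, without a shell
```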

    Then runs Benchmark::Dumb's cmpthese($count, \%subs). Usually $count
    can be set to 0, but for commands that finish quickly (on the order of
    milliseconds), like those in the above example, I set it to around 100.

    Finally, shows the average run time of each command.

SEE ALSO

    Benchmark::Apps


