criterion performance measurements

overview

want to understand this report?

{{#report}}

{{name}}

                      lower bound             estimate                upper bound
OLS regression        xxx                     xxx                     xxx
R² goodness-of-fit    xxx                     xxx                     xxx
Mean execution time   {{anMeanLowerBound}}    {{anMean.estPoint}}     {{anMeanUpperBound}}
Standard deviation    {{anStdDevLowerBound}}  {{anStdDev.estPoint}}   {{anStdDevUpperBound}}

Outlying measurements have {{anOutlierVar.ovDesc}} ({{anOutlierVar.ovFraction}}%) effect on the estimated standard deviation.

{{/report}}

understanding this report

In this report, each function benchmarked by criterion is assigned a section of its own. The charts in each section are interactive: hover your mouse over data points and annotations to see more details.
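
For context, a report like this is produced by running a criterion benchmark executable with its --output flag (e.g. --output report.html). The sketch below is a minimal, illustrative suite — the fib function and the labels are invented for this example, not taken from this report — and each bench entry would become one section of the page:

    import Criterion.Main

    -- A deliberately slow pure function to measure (illustrative).
    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = defaultMain
      [ bgroup "fib"
          [ bench "10" $ whnf fib 10
          , bench "20" $ whnf fib 20
          ]
      ]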

Under the charts is a small table. The first two rows are the results of a linear regression run on the measurements displayed in the right-hand chart.
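
The regression fits elapsed time against the number of iterations run in each measurement batch, so the slope of the fitted line estimates the time taken by a single iteration. The following is a simplified sketch of such an ordinary-least-squares fit through the origin, together with its R² goodness-of-fit; criterion's own analysis code is more careful than this:

    -- Ordinary least squares through the origin: given pairs of
    -- (iteration count, elapsed seconds), the slope of the fitted
    -- line estimates the time taken by one iteration.
    olsSlope :: [(Double, Double)] -> Double
    olsSlope samples =
        sum [x * y | (x, y) <- samples] / sum [x * x | (x, _) <- samples]

    -- R² goodness-of-fit of that line: 1.0 means a perfect fit.
    rSquared :: Double -> [(Double, Double)] -> Double
    rSquared slope samples = 1 - ssRes / ssTot
      where
        ys    = map snd samples
        meanY = sum ys / fromIntegral (length ys)
        ssRes = sum [(y - slope * x) ^ (2 :: Int) | (x, y) <- samples]
        ssTot = sum [(y - meanY) ^ (2 :: Int) | y <- ys]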

We use a statistical technique called the bootstrap to provide confidence intervals on our estimates. The bootstrap-derived upper and lower bounds on estimates let you see how accurate we believe those estimates to be. (Hover the mouse over the table headers to see the confidence levels.)
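 
In outline, the bootstrap resamples the measurements with replacement many times, recomputes the statistic of interest on each resample, and reads the interval bounds off the distribution of those recomputed values. The sketch below is an illustrative percentile bootstrap, not criterion's implementation (criterion uses a bias-corrected variant from the statistics package); randomRIO comes from the random package:

    import Data.List (sort)
    import System.Random (randomRIO)  -- from the random package

    -- One resample: draw as many values as we have, with replacement.
    resample :: [Double] -> IO [Double]
    resample xs = mapM (\_ -> (xs !!) <$> randomRIO (0, length xs - 1)) xs

    -- Percentile bootstrap: recompute the statistic on many resamples
    -- and take quantiles of the results as the interval bounds.
    bootstrapCI :: Int -> ([Double] -> Double) -> [Double] -> IO (Double, Double)
    bootstrapCI trials stat xs = do
        estimates <- mapM (\_ -> stat <$> resample xs) [1 .. trials]
        let sorted = sort estimates
            at q   = sorted !! floor (q * fromIntegral (trials - 1))
        pure (at 0.025, at 0.975)  -- a 95% confidence interval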

A noisy benchmarking environment can cause some or many measurements to fall far from the mean, and these outlying measurements can significantly inflate the estimate of the standard deviation. We therefore calculate and display an estimate of the extent to which the standard deviation has been inflated by outliers.
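
Outliers here are flagged by the usual box-plot rule: a measurement falling more than 1.5 times the interquartile range beyond the quartiles counts as an outlier (criterion further grades them as mild or severe, and the percentage reported above comes from a separate estimate of how much they inflate the variance). A simplified sketch of the classification:

    import Data.List (sort)

    -- Count measurements outside the 1.5 * IQR box-plot fences.
    -- Simplified: criterion also distinguishes mild from severe
    -- outliers (1.5 * IQR versus 3 * IQR) in each tail.
    countOutliers :: [Double] -> Int
    countOutliers xs = length [x | x <- xs, x < lo || x > hi]
      where
        sorted = sort xs
        at p   = sorted !! floor (p * fromIntegral (length xs - 1))
        q1     = at 0.25
        q3     = at 0.75
        iqr    = q3 - q1
        lo     = q1 - 1.5 * iqr
        hi     = q3 + 1.5 * iqr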