# Iterative Optimization Heuristics Profiler

## Development Team

- Hao Wang, *Leiden Institute of Advanced Computer Science*
- Diederick Vermetten, *Leiden Institute of Advanced Computer Science*
- Furong Ye, *Leiden Institute of Advanced Computer Science*
- Carola Doerr, *CNRS and Sorbonne University*
- Ofer Shir, *Migal - The Galilee Research Institute, Tel-Hai College*
- Thomas Bäck, *Leiden Institute of Advanced Computer Science*

## Reference

When using IOHprofiler and parts thereof, please kindly cite this work as

Carola Doerr, Hao Wang, Furong Ye, Sander van Rijn, Thomas Bäck: *IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics*, arXiv e-prints:1810.05281, 2018.

```
@ARTICLE{IOHprofiler,
author = {Carola Doerr and Hao Wang and Furong Ye and Sander van Rijn and Thomas B{\"a}ck},
title = {{IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics}},
journal = {arXiv e-prints:1810.05281},
archivePrefix = "arXiv",
eprint = {1810.05281},
year = 2018,
month = oct,
keywords = {Computer Science - Neural and Evolutionary Computing},
url = {https://arxiv.org/abs/1810.05281}
}
```

## Acknowledgements

This work was supported by the Chinese Scholarship Council (CSC No. 201706310143), by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with the Gaspard Monge Program for optimization, operations research, and their interactions with data sciences, by the Paris Ile-de-France Region, and by COST Action CA15140 “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.

## License

This application is governed by the **BSD 3-Clause license**.

BSD 3-Clause License

Copyright © 2018, All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# IOHanalyzer

This is the post-processing tool of the project **Iterative Optimization Heuristics Profiler** (IOHprofiler). It provides a web-based interface to analyze and visualize benchmark data collected from previous experiments. Importantly, we **do support** the widely used COCO data format (aka Black-Box Optimization Benchmarking).

This tool is built mainly on the R packages Shiny, plotly, and Rcpp. To use it, the following options are available:

- Install the package directly from CRAN, or the development version from GitHub using devtools (see R-package below)
- Clone this repository and run the tool from source
- Use the web-based service right away

## Documentation

The details on the experimentation and post-processing tool can be found on arXiv.org.

## Using IOHanalyzer as an R package

To install the IOHanalyzer R package, please install the R environment first. The binaries and installation manual for R can be found at https://cran.r-project.org/. Then start up the **R console**, which can be done (in case you're not familiar with R) either by executing the command `R` in your system terminal or by opening the R application.

IOHanalyzer is now available on CRAN! To install the CRAN version, simply execute the following command in your R console:

```
install.packages('IOHanalyzer')
```

### Installing the development version

The CRAN version of IOHanalyzer is stable, but since the package is under continuous development, it can make sense to use a development version instead. The version in the master branch of this repository is updated relatively frequently and may be more useful than the CRAN version. However, it is also more likely to contain errors; please don't hesitate to report them in the issues section of this GitHub page.

To use the development version, the 'devtools' package is needed to install the software. To install this package, please copy-paste and execute the following command in the R console:

```
install.packages('devtools')
```

Error messages will be shown in your R console if there is any installation issue. Now, the IOHanalyzer package can be installed and loaded using the following commands:

```
devtools::install_github('IOHprofiler/IOHanalyzer')
library('IOHanalyzer')
```

This will install the package and all required dependencies. The GUI can be accessed using the command:

```
runServer()
```

## Installation

Alternatively, you can clone the source code directly.
This software is mainly written in **R**. To run it directly from the source code, please install the R environment first. The binaries and installation manual for R can be found at https://cran.r-project.org/.

After the R environment is correctly installed on your machine, several R packages are needed to run the software. Please start up the **R console**, which can be done (in case you're not familiar with R) either by executing the command `R` in your system terminal or by opening the R application. Once it is running, please copy-paste and execute the required installation commands in the R console to install all dependencies.

Error messages will be shown in your R console if there is any installation issue.

To allow for downloading of plots, [orca](https://github.com/plotly/orca) and [Inkscape](https://inkscape.org/release/inkscape-0.92.4/) are needed.

Then, please clone (or download) this repository onto your own system. To clone the repository, please execute one of the following commands in your **system console** (terminal):

```
> git clone git@github.com:IOHprofiler/IOHanalyzer.git
```

```
> git clone https://github.com/IOHprofiler/IOHanalyzer.git
```

To download, please click the green download button on this page.

To start the post-processing tool, please execute the following commands in the **R console**:

```
> shiny::runApp('/path/to/the/clone/folder')
```

## Online Service

Alternatively, we have built a server to put this tool online, which is currently hosted at the Leiden Institute of Advanced Computer Science, Leiden University. The server can be accessed via http://iohprofiler.liacs.nl.

## Data Preparation

Data preparation is fairly easy for this tool. Just compress the data folder obtained from the experiment into a **zip** file and upload it. Currently, we support two data formats:

- IOHprofiler: our own csv-based format,
- COCO: data format of the COCO benchmark environment.

## Programming Interface

In addition to the graphical user interface, it is possible to directly call several procedures to analyze the data.

- To read and align all data sets in a folder; for example, a COCO (BBOB) data folder can be loaded as follows:

```
> dsList <- read_dir('/path/to/data/folder', format = 'COCO')
> dsList
DataSetList:
1: DataSet((1+1)-Cholesky-CMA on f1 2D)
2: DataSet((1+1)-Cholesky-CMA on f1 5D)
3: DataSet((1+1)-Cholesky-CMA on f1 10D)
4: DataSet((1+1)-Cholesky-CMA on f1 20D)
5: DataSet((1+1)-Cholesky-CMA on f10 2D)
6: DataSet((1+1)-Cholesky-CMA on f10 5D)
7: DataSet((1+1)-Cholesky-CMA on f10 10D)
8: DataSet((1+1)-Cholesky-CMA on f10 20D)
9: DataSet((1+1)-Cholesky-CMA on f11 2D)
10: DataSet((1+1)-Cholesky-CMA on f11 5D)
```

The return value is a list of **DataSets**. Each data set consists of:

- **runtime samples** (aligned by target values),
- **function value samples** (aligned by runtime), and
- **endogenous parameter samples** of your optimization algorithm (aligned by target values).

- To get a general summary of one data set, you can use the function `summary`:

```
> summary(dsList[[1]])
DataSet Object:
Source: COCO
Algorithm: (1+1)-Cholesky-CMA
Function ID: 1
Dimension: 2D
80 instance found: 1,2,3,4,5,6,7,...,73,74,75,76,77,78,79,80
runtime summary:
target mean median sd 2% 5% 10% 25% 50% 75% 90% 95% 98% ERT runs ps
1: 7.010819e+01 1.0000 1.0 0.0000000 1 1 1 1 1 1 1 1 1 1.0000 80 1.0000
2: 6.642132e+01 1.0125 1.0 0.1118034 1 1 1 1 1 1 1 1 1 1.0125 80 1.0000
3: 6.298712e+01 1.1125 1.0 0.8999824 1 1 1 1 1 1 1 1 1 1.1125 80 1.0000
4: 6.254396e+01 1.1375 1.0 0.9242684 1 1 1 1 1 1 1 1 2 1.1375 80 1.0000
5: 6.173052e+01 1.2000 1.0 1.1295793 1 1 1 1 1 1 1 1 3 1.2000 80 1.0000
---
1478: 9.473524e-10 182.6000 182.0 24.0894168 145 145 145 145 181 195 195 210 210 3039.6000 5 0.0625
1479: 2.759535e-10 192.0000 188.5 13.5892114 181 181 181 181 182 195 210 210 210 3799.5000 4 0.0500
1480: 2.463310e-10 195.6667 195.0 14.0118997 182 182 182 182 195 195 210 210 210 5066.0000 3 0.0375
1481: 5.223910e-11 196.0000 196.0 19.7989899 182 182 182 182 182 210 210 210 210 7599.0000 2 0.0250
1482: 1.638512e-11 210.0000 210.0 NA 210 210 210 210 210 210 210 210 210 15198.0000 1 0.0125
function value summary:
algId runtime runs mean median sd 2% 5% 10%
1: (1+1)-Cholesky-CMA 1 80 2.019164e+01 1.313629e+01 1.872005e+01 3.976009e-01 1.186758e+00 2.267215e+00
2: (1+1)-Cholesky-CMA 2 80 1.672518e+01 1.171157e+01 1.626487e+01 3.412261e-01 5.649713e-01 1.313165e+00
3: (1+1)-Cholesky-CMA 3 80 1.341813e+01 7.960940e+00 1.466877e+01 1.177243e-01 1.778454e-01 5.553242e-01
4: (1+1)-Cholesky-CMA 4 80 1.100825e+01 6.439678e+00 1.261937e+01 9.206390e-02 1.492565e-01 5.221506e-01
5: (1+1)-Cholesky-CMA 5 80 9.326633e+00 5.492333e+00 1.213908e+01 7.246194e-02 1.031687e-01 3.482309e-01
---
90: (1+1)-Cholesky-CMA 229 80 6.321111e-09 4.609703e-09 8.318463e-09 1.648124e-10 9.137825e-10 1.336938e-09
91: (1+1)-Cholesky-CMA 231 80 5.823366e-09 4.609703e-09 7.618201e-09 1.648124e-10 9.137825e-10 1.336938e-09
92: (1+1)-Cholesky-CMA 238 80 5.549184e-09 4.506106e-09 7.258216e-09 1.648124e-10 9.137825e-10 1.336938e-09
93: (1+1)-Cholesky-CMA 251 80 4.902827e-09 4.506106e-09 2.863671e-09 1.648124e-10 9.137825e-10 1.336938e-09
94: (1+1)-Cholesky-CMA 257 80 4.737548e-09 4.461953e-09 2.526087e-09 1.648124e-10 9.137825e-10 1.336938e-09
25% 50% 75% 90% 95% 98%
1: 5.166078e+00 1.313629e+01 2.922941e+01 5.004596e+01 5.895140e+01 6.442948e+01
2: 3.506037e+00 1.171157e+01 2.470745e+01 3.956328e+01 5.012620e+01 6.002245e+01
3: 2.786120e+00 7.960940e+00 2.103983e+01 3.251354e+01 4.030147e+01 5.690653e+01
4: 1.284869e+00 6.439678e+00 1.636467e+01 2.453239e+01 3.043178e+01 4.718666e+01
5: 9.886750e-01 5.492333e+00 1.317642e+01 2.216466e+01 2.913811e+01 4.718666e+01
---
90: 2.799292e-09 4.609703e-09 6.554838e-09 8.992768e-09 1.235938e-08 2.771700e-08
91: 2.799292e-09 4.609703e-09 6.321677e-09 8.916086e-09 9.326487e-09 1.785183e-08
92: 2.799292e-09 4.506106e-09 6.099065e-09 8.854852e-09 9.180278e-09 1.041480e-08
93: 2.799292e-09 4.506106e-09 6.099065e-09 8.659254e-09 8.982758e-09 9.377720e-09
94: 2.799292e-09 4.461953e-09 5.970504e-09 8.547357e-09 8.925389e-09 9.234526e-09
Attributes: names, class, funcId, DIM, Precision, algId, comment, datafile, instance, maxRT, finalFV, src, maximization
```

- To get a **summary** of one data set at some function values / runtimes (e.g., the runtime distribution), you can use the functions `get_RT_summary` (RunTime) and `get_FV_summary` (FunctionValue):

```
> get_RT_summary(dsList[[1]], ftarget = 1e-1)
algId target mean median sd 2% 5% 10% 25% 50% 75% 90% 95% 98% ERT runs ps
1: (1+1)-Cholesky-CMA 0.09986529 36.55 37.5 17.11236 4 5 14 22 37 49 57 67 68 36.55 80 1
```

```
> get_FV_summary(dsList[[1]], runtime = 100)
algId runtime runs mean median sd 2% 5% 10% 25% 50% 75% 90% 95% 98%
1: (1+1)-Cholesky-CMA 100 80 0.0005886303 8.307195e-05 0.001165899 7.233939e-07 3.708489e-06 8.776148e-06 2.409768e-05 8.307195e-05 0.0004779061 0.001991386 0.002923357 0.004207976
```

- To get the **samples** at some function values / runtimes, you can use the functions `get_RT_sample` (RunTime) and `get_FV_sample` (FunctionValue):

```
> get_RT_sample(dsList[[1]], ftarget = 1e-1, output = 'long')
algId target run RT
1: (1+1)-Cholesky-CMA 0.1 1 69
2: (1+1)-Cholesky-CMA 0.1 2 39
3: (1+1)-Cholesky-CMA 0.1 3 38
4: (1+1)-Cholesky-CMA 0.1 4 34
5: (1+1)-Cholesky-CMA 0.1 5 67
---
76: (1+1)-Cholesky-CMA 0.1 76 52
77: (1+1)-Cholesky-CMA 0.1 77 22
78: (1+1)-Cholesky-CMA 0.1 78 26
79: (1+1)-Cholesky-CMA 0.1 79 33
80: (1+1)-Cholesky-CMA 0.1 80 25
```

```
> get_FV_sample(dsList[[1]], runtime = 100, output = 'long')
algId runtime run f(x)
1: (1+1)-Cholesky-CMA 100 1 4.007000e-03
2: (1+1)-Cholesky-CMA 100 2 5.381801e-04
3: (1+1)-Cholesky-CMA 100 3 3.970844e-05
4: (1+1)-Cholesky-CMA 100 4 5.345724e-04
5: (1+1)-Cholesky-CMA 100 5 1.458869e-03
---
76: (1+1)-Cholesky-CMA 100 76 1.817090e-03
77: (1+1)-Cholesky-CMA 100 77 3.850201e-06
78: (1+1)-Cholesky-CMA 100 78 1.330411e-05
79: (1+1)-Cholesky-CMA 100 79 3.933669e-05
80: (1+1)-Cholesky-CMA 100 80 5.658113e-07
```

Or output in a wide format:

```
> get_RT_sample(dsList[[1]], ftarget = 1e-1, output = 'wide')
algId target run.1 run.2 run.3 run.4 run.5 run.6 run.7 run.8 run.9 run.10 run.11 run.12 run.13 run.14 run.15
1: (1+1)-Cholesky-CMA 0.1 69 39 38 34 67 3 36 41 14 30 41 47 31 48 53
run.16 run.17 run.18 run.19 run.20 run.21 run.22 run.23 run.24 run.25 run.26 run.27 run.28 run.29 run.30 run.31 run.32
1: 8 19 18 57 16 28 51 53 22 47 53 17 5 48 13 63 45
run.33 run.34 run.35 run.36 run.37 run.38 run.39 run.40 run.41 run.42 run.43 run.44 run.45 run.46 run.47 run.48 run.49
1: 15 24 46 65 44 71 52 31 17 18 45 19 5 37 50 33 24
run.50 run.51 run.52 run.53 run.54 run.55 run.56 run.57 run.58 run.59 run.60 run.61 run.62 run.63 run.64 run.65 run.66
1: 40 49 55 48 50 33 20 33 35 49 37 4 22 68 57 19 44
run.67 run.68 run.69 run.70 run.71 run.72 run.73 run.74 run.75 run.76 run.77 run.78 run.79 run.80
1: 38 48 31 14 41 50 67 21 43 52 22 26 33 25
```

- It is also possible to generate diagnostic plots (using `ggplot2` or `plotly`) with the provided plotting functions, which follow the naming style `Plot.{RT/FV/PAR}.{Plot_type}`.

For more information on these functions, use the documentation available by executing the following type of command:

```
?Plot.RT.Histogram
```

## Contact

If you have any questions, comments, suggestions or pull requests, please don't hesitate to contact us at IOHprofiler@liacs.leidenuniv.nl!

## Cite us

The development team is:

- Hao Wang, *Leiden Institute of Advanced Computer Science*
- Diederick Vermetten, *Leiden Institute of Advanced Computer Science*
- Carola Doerr, *CNRS and Sorbonne University*
- Furong Ye, *Leiden Institute of Advanced Computer Science*
- Sander van Rijn, *Leiden Institute of Advanced Computer Science*
- Thomas Bäck, *Leiden Institute of Advanced Computer Science*

When using IOHprofiler and parts thereof, please kindly cite this work as specified in the Reference section above.

Upload Data

Load Data from Repository

Data Processing Prompt

List of Processed Data

Data Overview

This table provides an overview of the function values for all algorithms chosen on the left:

- worst recorded: the worst \(f(x)\) value ever recorded across *all iterations*,
- worst reached: the worst \(f(x)\) value reached in the *last iterations*,
- best reached: the best \(f(x)\) value reached in the *last iterations*,
- mean reached: the mean \(f(x)\) value reached in the *last iterations*,
- median reached: the median \(f(x)\) value reached in the *last iterations*,
- succ: the number of runs which successfully hit the best reached \(f(x)\).

Runtime Statistics at Chosen Target Values

This table summarizes for each algorithm and each target value chosen on the left:

- runs: the number of runs that have found at least one solution of the required target quality \(f(x)\),
- mean: the average number of function evaluations needed to find a solution of function value at least \(f(x)\),
- median, \(2\%, 5\%,\ldots,98\%\): the quantiles of these first-hitting times.

When not all runs managed to find the target value, the statistics hold only for those runs that did. That is, the mean value is the mean of the successful runs only, and likewise for the quantiles. An alternative version with simulated restarts is currently in preparation.
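The restriction to successful runs can be sketched in a few lines. The snippet below is an illustrative Python example with made-up hitting times (it is not part of the IOHanalyzer API):

```python
from statistics import mean, median

# Hypothetical first-hitting times for six runs; None marks a run that
# never reached the chosen target and is excluded from the statistics.
hitting_times = [12, 30, 45, None, 22, None]

successful = [t for t in hitting_times if t is not None]
runs = len(successful)        # number of successful runs
mean_rt = mean(successful)    # mean over the successful runs only
median_rt = median(successful)
print(runs, mean_rt, median_rt)
```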

Original Runtime Samples

This table shows for each selected algorithm \(A\), each selected target value \(f(x)\), and each run \(r\) the number \(T(A,f(x),r)\) of evaluations performed by the algorithm until it evaluated for the first time a solution of quality at least \(f(x)\).

Expected Runtime (ERT): single function

The **mean, median, standard deviation, and ERT** of the runtime samples
are depicted against the best objective values.
The displayed elements (mean, median, standard deviation, and ERT)
can be switched on and off by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.
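For reference, the ERT (expected running time) is commonly computed as the total number of evaluations spent by all runs divided by the number of successful runs. A minimal Python sketch with hypothetical data:

```python
# Hypothetical fixed-target data for one algorithm on one target:
# hitting times of successful runs and the budgets spent by failed runs.
hit_times = [100, 150, 200]       # runs that reached the target
failed_budgets = [1000, 1000]     # runs that exhausted their budget

# ERT: total evaluations spent by all runs / number of successful runs
ert = (sum(hit_times) + sum(failed_budgets)) / len(hit_times)
print(round(ert, 2))
```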

Expected Runtime (ERT): all functions

Expected Runtime Comparisons

### Histogram of Fixed-Target Runtimes

This histogram counts how many runs needed between
\(t\) and \(t+1\) function evaluations. The bins
\([t,t+1)\) are chosen automatically. The bin size is determined
by the so-called **Freedman–Diaconis rule**: \(\text{Bin size}=
2\frac{Q_3 - Q_1}{\sqrt[3]{n}}\), where \(Q_1, Q_3\) are the \(25\%\)
and \(75\%\) percentiles of the runtime and \(n\) is the sample size.
The displayed algorithms can be selected by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.
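The bin-size formula can be reproduced in a few lines. The snippet below is an illustrative Python sketch with made-up runtime samples; the quartiles follow Python's default `statistics.quantiles` method, which may differ slightly from the estimator used in the tool:

```python
import statistics

runtimes = [12, 15, 18, 22, 25, 30, 34, 40]      # hypothetical hitting times
q1, _, q3 = statistics.quantiles(runtimes, n=4)  # 25% and 75% percentiles
# Freedman-Diaconis: bin size = 2 * IQR / cube root of the sample size
bin_size = 2 * (q3 - q1) / len(runtimes) ** (1 / 3)
print(bin_size)
```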

### Empirical Probability Mass Function of the Runtime

**Warning!** The **probability mass function** of the runtime is
approximated by treating the runtime as a *continuous* random variable
and applying **kernel density estimation** (KDE):

The plot shows the distribution of the first hitting
times of the individual runs (dots), and an estimated
distribution of the probability mass function.
The displayed algorithms can be selected by clicking on
the legend on the right. A **tooltip** and **toolbar**
appear when hovering over the figure. This also includes the
option to download the plot as a png file. A csv file with the runtime
data can be downloaded from the
Data Summary tab.
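The kernel estimation mentioned above can be sketched as follows: an illustrative Python implementation of a Gaussian KDE with hypothetical hitting times and an arbitrary bandwidth (the tool's actual bandwidth selection may differ):

```python
import math

def kde(samples, x, bandwidth=2.0):
    """Gaussian kernel density estimate of `samples`, evaluated at x."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / norm

runtimes = [10, 12, 12, 15, 20]   # hypothetical first-hitting times
d_mode = kde(runtimes, 12)        # density near the bulk of the sample
d_tail = kde(runtimes, 30)        # density far from all samples
print(d_mode, d_tail)
```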

Empirical Cumulative Distribution: Single target

Each ECDF curve shows the proportion of the runs
that have found a solution of at least the required
target value within the budget given by the \(x\)-axis.
The displayed curves can be selected by clicking on the legend on the right. A **tooltip**
and **toolbar** appear when hovering over the figure.
This also includes the option to download the plot as a png file.
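Such an ECDF curve boils down to a simple counting procedure; a Python sketch with hypothetical hitting times:

```python
# Hypothetical first-hitting times of one target; None = target never hit.
hitting_times = [50, 120, 80, None, 200, None]

def ecdf(budget):
    """Proportion of runs that hit the target within `budget` evaluations."""
    hits = sum(1 for t in hitting_times if t is not None and t <= budget)
    return hits / len(hitting_times)

vals = [ecdf(b) for b in (50, 100, 250)]
print(vals)
```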

Aggregated Empirical Cumulative Distribution: Single function

The evenly spaced target values are:

The fraction of (run,target value)
pairs \((i,v)\) satisfying that the best solution that the algorithm has
found in the \(i\)-th run within the given time budget \(t\) has quality at least
\(v\) is plotted against the available budget \(t\). The displayed elements can be switched
on and off by clicking on the legend on the right. A **tooltip**
and **toolbar** appear when hovering over the figure.
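The (run, target value) counting described above can be sketched as follows. This is illustrative Python with made-up data, assuming minimization, so that "reaching a target \(v\)" means the best value found is at most \(v\):

```python
def aggregated_ecdf(best_values, targets):
    """Fraction of (run, target) pairs (i, v) for which the best value
    found by run i within the budget has reached target v
    (minimization: best value <= v)."""
    hit = sum(1 for b in best_values for v in targets if b <= v)
    return hit / (len(best_values) * len(targets))

# best value found by each of three runs within some budget t
best_within_t = [0.5, 2.0, 10.0]
targets = [1.0, 5.0]
frac = aggregated_ecdf(best_within_t, targets)
print(frac)
```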

Aggregated Empirical Cumulative Distribution: All functions

The fraction of (run,target value, ...)
pairs \((i,v, ...)\) satisfying that the best solution that the algorithm has
found in the \(i\)-th (run of function \(f\) in dimension \(d\)) within
the given time budget \(t\) has quality at least \(v\) is plotted against
the available budget \(t\). The displayed elements can be switched
on and off by clicking on the legend on the right. A **tooltip**
and **toolbar** appear when hovering over the figure. Aggregation over
functions and dimensions can be switched on or off using the checkboxes on
the left; when aggregation is off, the selected function / dimension
is chosen according to the value in the bottom-left selection box.

The selected targets are:

Area Under the ECDF

The **area under the ECDF** is
calculated for the sequence of target values specified on the left. The displayed
values are normalized against the maximal number of function evaluations for
each algorithm. Intuitively, the larger the area, the better the algorithm.
The displayed algorithms can be selected by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.
This also includes the option to download the plot as a png file.
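Up to discretization, this normalized area is simply the average of the ECDF values over evenly spaced budgets; a minimal Python sketch with made-up ECDF values:

```python
def auc_ecdf(ecdf_values):
    """Riemann-sum area under an ECDF sampled at evenly spaced budgets,
    normalized by the number of sample points so the result lies in [0, 1]."""
    return sum(ecdf_values) / len(ecdf_values)

# Hypothetical ECDF values at four evenly spaced budgets
area = auc_ecdf([0.0, 0.25, 0.5, 1.0])
print(area)
```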

Data Overview

Target Statistics at Chosen Budget Values

This table summarizes for each algorithm and each **budget** \(B\) chosen on the left:

- runs: the number of runs that have performed at least \(B\) evaluations,
- mean: the average best-so-far function value obtained within a budget of \(B\) evaluations,
- median, \(2\%, 5\%,\ldots,98\%\): the quantiles of the best function values found within the first \(B\) evaluations.

When not all runs evaluated at least \(B\) search points, the statistics hold for the subset of runs that did. Alternative statistics using simulated restarted algorithms are in preparation.
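These fixed-budget statistics are plain summary statistics over the best-so-far values of the qualifying runs. An illustrative Python sketch with made-up values; the quantile method here is Python's `statistics.quantiles`, which may differ from the tool's:

```python
from statistics import mean, quantiles

# Hypothetical best-so-far function values after B evaluations; runs that
# performed fewer than B evaluations would be excluded beforehand.
best_at_B = [0.8, 1.2, 0.5, 2.0, 1.6]

n_runs = len(best_at_B)
mean_fv = mean(best_at_B)
qs = quantiles(best_at_B, n=4)    # 25%, 50% (median), 75% quantiles
print(n_runs, mean_fv, qs)
```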

Original Target Samples

Expected Target Value (per function)

The **mean, median, and standard deviation** of the best function values
found with a fixed budget of evaluations are depicted against the budget.
The displayed elements can be switched on and off by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.

Expected Function values: all functions

Expected Function Value comparisons

### Histogram of Fixed-Budget Targets

This histogram counts the number of runs whose best-so-far function
value within the first \(B\) evaluations lies between \(v_i\) and
\(v_{i+1}\). The buckets \([v_i,v_{i+1})\) are chosen automatically
according to the so-called **Freedman–Diaconis rule**: \(\text{Bin size}=
2\frac{Q_3 - Q_1}{\sqrt[3]{n}}\), where \(Q_1, Q_3\) are the \(25\%\)
and \(75\%\) percentiles of the sample and \(n\) is the sample size.
The displayed algorithms can be selected by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.

### Empirical Probability Density Function of Fixed-Budget Function Values

The plot shows, for the budget selected on the left, the distribution
of the best-so-far function values of the individual runs (dots), and an estimated probability density function.
The displayed algorithms can be selected by clicking on the legend on the right. A **tooltip** and **toolbar**
appear when hovering over the figure. A csv file with the underlying data can be downloaded from the
Data Summary tab.

Empirical Cumulative Distribution of the Fixed-Budget Values: Single Budgets

Each ECDF curve shows the proportion of the runs that have found
within the given budget \(B\) a solution of at least the required target
value given by the \(x\)-axis. The displayed curves can be selected
by clicking on the legend on the right. A **tooltip** and **toolbar**
appear when hovering over the figure.

Empirical Cumulative Distribution of the Fixed-Budget Values: Aggregation

The evenly spaced budget values are:

The fraction of (run,budget) pairs \((i,B)\) satisfying that the best
solution that the algorithm has found in the \(i\)-th run within the
first \(B\) evaluations has quality at **most** \(v\) is plotted
against the target value \(v\). The displayed elements can be switched
on and off by clicking on the legend on the right. A **tooltip** and
**toolbar** appear when hovering over the figure.

Area Under the ECDF

The **area under the ECDF** is
calculated for the sequence of budget values specified on the left. The displayed
values are normalized against the maximal target value recorded for
each algorithm. Intuitively, the **smaller** the area, the **better** the algorithm.
The displayed algorithms can be selected by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.

Expected Parameter Value (per function)

The **mean or median** of the internal parameters of the algorithm
observed at a fixed budget of evaluations is depicted against the budget.
The displayed elements can be switched on and off by clicking on the legend on the right.
A **tooltip** and **toolbar** appear when hovering over the figure.

Parameter Statistics at Chosen Target Values

This table summarizes for each algorithm and each target value chosen on the left:

- runs: the number of runs where non-missing parameter values are observed for each required target value \(f(x)\),
- mean: the average value of the specified **parameter** when the target value \(f(x)\) is hit,
- median, \(2\%, 5\%,\ldots,98\%\): the quantiles of these parameter values.

When not all runs managed to find the target value, the statistics hold only for those runs that did. That is, the mean value is the mean of the successful runs only, and likewise for the quantiles. An alternative version with simulated restarts is currently in preparation.

Parameter Sample at Chosen Target Values

This table shows for each selected algorithm \(A\), each selected target value \(f(x)\), and each run \(r\) the parameter value observed when the target value \(f(x)\) is reached for the first time.

Color Settings

Example of the current colorscheme.

General settings

Set the figure download properties

Set the figure fontsizes