CEC-2021 Tutorial: Benchmarking and Experimentation
Tutorial "Benchmarking and Experimentation: Pitfalls and Best Practices"
Time: Monday, June 28th, 2021, 2:15 PM CEST
Presenters: Thomas Bartz-Beielstein, Boris Naujoks, Mike Preuss
Experimentation is arguably the most important driver of the current tremendous advances in computer science and artificial intelligence, and this is also true for evolutionary computation. Whereas in other sciences experimentation is well structured and interacts closely with theory, this interaction is much weaker in optimization and, more generally, in AI research.
In a first wave of improvements, the presenters helped to establish the currently applied experimental methodology around 15 years ago. Many of the questions around structured experimentation have now been revived by the current AI hype, where in part the same problems are being tackled again. However, there are also new problems that need to be handled, namely those of replicability and reproducibility. At the same time, the number of available benchmarking environments and competitions has increased enormously.
In our tutorial, we bring together the questions and problems around benchmarking and experimentation and provide an overview of the current state of the art. Our intention is to provide practical guidelines and hints for circumventing the many pitfalls that can occur in experimental work, especially with benchmarks.
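As one concrete example of such a guideline, replicability benefits from running every algorithm several times with documented random seeds and reporting the resulting distribution rather than a single lucky run. The following Python sketch is purely illustrative and not part of the tutorial material; the run_optimizer stub and the choice of 30 replications are our assumptions.

```python
import random
import statistics

def run_optimizer(seed: int) -> float:
    """Placeholder for a single run of the algorithm under study;
    returns the best objective value found in that run."""
    rng = random.Random(seed)  # per-run RNG: runs are independent and replicable
    return min(rng.uniform(0.0, 1.0) for _ in range(100))

# Repeat the experiment with documented seeds instead of reporting one run.
seeds = range(30)  # 30 replications, an illustrative (assumed) choice
results = [run_optimizer(s) for s in seeds]

# Report a distribution summary, not just the single best run
# (cherry-picking the best run is a classic benchmarking pitfall).
print(f"mean={statistics.mean(results):.4f}  "
      f"sd={statistics.stdev(results):.4f}  "
      f"median={statistics.median(results):.4f}")
```

Because the seeds are recorded, anyone can rerun the exact experiment and obtain the same numbers, which is the distinction between a replicable result and an anecdote.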
More: Tutorial T17: Benchmarking and Experimentation: Pitfalls and Best Practices
Room: Room 4. Instructors: Thomas Bartz-Beielstein, Boris Naujoks, Mike Preuss. See: https://cec2021.mini.pw.edu.pl/upload/program_html/IEEE-CEC-2021-Program__-27.06.2021.html
References.
Thomas Bartz-Beielstein’s presentation is based on the preprint: T. Bartz-Beielstein, F. Rehbach, A. Sen, and M. Zaefferer, Surrogate Model Based Hyperparameter Tuning for Deep Learning with SPOT, 2021. The preprint is available from: https://www.spotseven.de/new-publications/
June 2021