PPSN 2020 Workshop - Good Benchmarking Practices for Evolutionary Computation

BENCHMARK 2020


Workshop Description
----------------------------
Brace yourself for a highly interactive workshop, with plenty of room for discussion and exchange. This is not just another mini-conference, but a platform to come together and discuss recent progress and challenges in the area of benchmarking iterative optimization heuristics.
In the era of explainable and interpretable AI, it is increasingly necessary to develop a deep understanding of how algorithms work and how new algorithms compare to existing ones, both in terms of strengths and weaknesses. For this reason, benchmarking plays a vital role in understanding algorithm behavior. Even though benchmarking is a highly researched topic within the evolutionary computation community, a number of open questions and challenges remain to be explored:
(i) most commonly-used benchmarks are too small and cover only a part of the problem space,
(ii) benchmarks lack the complexity of real-world problems, making it difficult to transfer the learned knowledge to work in practice,
(iii) we need to develop proper statistical analysis techniques that can be applied depending on the nature of the data,
(iv) we need to develop user-friendly, openly accessible benchmarking software. This would enable a culture of sharing resources, ensure reproducibility, and help avoid common pitfalls in benchmarking optimization techniques. As such, we need to establish new standards for benchmarking in evolutionary computation research so that we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
The topics of interest for this workshop include, but are not limited to:
Performance measures for comparing algorithm behavior;
Novel statistical approaches for analyzing empirical data;
Selection of meaningful benchmark problems;
Landscape analysis;
Data mining approaches for understanding algorithm behavior;
Transfer learning from benchmark experiences to real-world problems;
Benchmarking tools for executing experiments and analysis of experimental results.
The schedule will be designed to encourage a high level of interactivity — expect a real workshop rather than (yet another) mini-conference!
Submissions
---------------------------------------
We particularly welcome position statements addressing or identifying open challenges in benchmarking optimization techniques, as well as suggestions for topics for in-depth discussion. Please also consider suggesting alternative discussion formats; we want this to be a real workshop, not yet another mini-conference!
Please send your suggestions for presentations and/or discussions by e-mail to all five main organizers listed below:
Carola Doerr (Carola.Doerr@mpi-inf.mpg.de)
Tome Eftimov (tome.eftimov@ijs.si)
Pascal Kerschke (kerschke@uni-muenster.de)
Pietro S. Oliveto (p.oliveto@sheffield.ac.uk)
Mike Preuss (m.preuss@liacs.leidenuniv.nl)
There are no format requirements. You can submit your ideas as plain e-mail text or as a PDF. Please indicate the format of your suggested contribution (talk, discussion, breakout, brainstorming, etc.) and how much time you suggest for this activity.
Please note that PPSN workshop papers are not published in the conference proceedings. However, if you want your contribution to be listed in the conference proceedings, please submit it to us by June 8 (AoE). Notification of acceptance will be sent on June 15; the PPSN early registration deadline is June 22.
If you do not care about being listed in the conference proceedings, you can send us your ideas, contributions, position papers, suggested activities, etc. at any time, ideally before July 31, 2020.
Organizers
------------------------------------------------------
Thomas Bäck (Leiden University, The Netherlands)
Thomas Bartz-Beielstein (TH Cologne, Germany)
Jakob Bossek (The University of Adelaide, Australia)
Bilel Derbel (University of Lille, France)
Carola Doerr (CNRS, Sorbonne University, Paris, France)
Tome Eftimov (Jožef Stefan Institute, Ljubljana, Slovenia)
Pascal Kerschke (University of Münster, Germany)
William La Cava (University of Pennsylvania, USA)
Arnaud Liefooghe (University of Lille, France)
Manuel López-Ibáñez (University of Manchester, UK)
Katherine Malan (University of South Africa)
Boris Naujoks (TH Cologne, Germany)
Pietro S. Oliveto (University of Sheffield, UK)
Patryk Orzechowski (University of Pennsylvania, USA)
Mike Preuss (Leiden University, The Netherlands)
Jérémy Rapin (Facebook AI Research, Paris, France)
Ofer M. Shir (Tel-Hai College and Migal Institute, Israel)
Olivier Teytaud (Facebook AI Research, Paris, France)
Heike Trautmann (University of Münster, Germany)
Ryan J. Urbanowicz (University of Pennsylvania, USA)
Vanessa Volz (modl.ai, Copenhagen, Denmark)
Markus Wagner (The University of Adelaide, Australia)
Hao Wang (LIACS, Leiden University, The Netherlands)
Thomas Weise (Institute of Applied Optimization, Hefei University, Hefei, China)
Borys Wróbel (Adam Mickiewicz University, Poland)
Aleš Zamuda (University of Maribor, Slovenia)