
Tutorials

The role of the tutorials is to serve as a platform for more intensive scientific exchange amongst researchers interested in a particular topic and as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.



How to Perform a Proper Statistical Study Analysis? Where we Started and Where are we Now in Statistical Performance Assessment Approaches for Stochastic Optimization Algorithms?


Instructors

Tome Eftimov
Jožef Stefan Institute
Slovenia
 
Brief Bio
Tome Eftimov is a senior researcher at the Computer Systems Department at the Jožef Stefan Institute. He is a visiting assistant professor at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje. He was a postdoctoral research fellow at Stanford University, USA, where he investigated biomedical relation outcomes using AI methods. In addition, he was a research associate at the University of California, San Francisco, investigating AI methods for information extraction from electronic health records. He obtained his PhD in Information and Communication Technologies (2018). His research interests include statistical data analysis, metaheuristics, natural language processing, representation learning, meta-learning, and machine learning. He has presented his work in 81 conference articles, 50 journal articles, and one Springer book published in 2022. He was included in Stanford University's 2022 list of the top 2% most influential scientists worldwide, across all disciplines, for his contributions to AI. The work related to Deep Statistical Comparison was presented as a tutorial (e.g., IJCCI 2018, IEEE SSCI 2019, GECCO 2020, 2021, 2022, 2024, PPSN 2020, 2022, IEEE CEC 2021, 2022, 2023) or as an invited lecture at several international conferences and universities. He is an organizer of several workshops related to AI at high-ranked international conferences. He is an Editor of the Evolutionary Computation journal and an Associate Editor of Expert Systems with Applications. He is involved in both national and European projects. Currently, he is coordinating bilateral projects with Sorbonne University, France (algorithm selection and configuration), Leibniz University Hannover, Germany (fair benchmarking for dynamic algorithm configuration), and the University of Banja Luka, Bosnia and Herzegovina (theoretical and machine learning approaches for graph data).
He has previously coordinated national projects on representation learning for stochastic optimization algorithms (2022-2024) and robust statistical analysis for single-objective optimization (2019-2021), as well as an EFSA-funded project on natural language processing for food science (2021-2022).
Peter Korošec
Independent Researcher
Slovenia
 
Brief Bio
Peter Korošec received his Ph.D. degree from the Jožef Stefan Postgraduate School, Ljubljana, Slovenia, in 2007. Since 2002, he has been a researcher at the Jožef Stefan Institute, Ljubljana. He is presently a researcher at the Computer Systems Department and an associate professor at the Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Koper. His current areas of research include meta-heuristic optimization and parallel/distributed computing.
Abstract

Nowadays, a statistical comparison is essential for comparing the results of a study against state-of-the-art approaches. Many researchers struggle with statistical comparisons because statistical tools are relatively complex and there are many to choose from. One problem lies in selecting the right statistic to use as a specific performance measure. For example, researchers often report either the average or the median without being aware that the average is sensitive to outliers and that both the average and the median are sensitive to statistically insignificant differences in the data. Even reporting the standard deviation alongside the average must be done with care, since large variances can result from the presence of outliers. Furthermore, these statistics only describe the data and do not provide any additional information about the relations that exist between the data; for that, a statistical test needs to be applied. Additionally, the selection of a statistic can influence the outcome of a statistical test.

This means that applying the appropriate statistical test requires knowing which conditions the data must meet for the test to be valid. This step is often omitted, and researchers simply apply a statistical test, in most cases borrowed from a similar published study, that is inappropriate for their data set. This kind of misunderstanding is all too common in the research community and can be observed in many high-ranking journal papers. Even if the statistical test is the correct one, flawed experimental design (e.g., comparing the results of tuned and non-tuned algorithms) will lead to wrong conclusions. This is sometimes done on purpose, to mislead the reader into believing that the authors' results are better than they actually are. The goal of the proposed tutorial is to provide researchers with the knowledge to correctly make a statistical comparison of their data.
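As a minimal illustration of the outlier sensitivity described above (the run values below are invented for the example, not taken from the tutorial materials), a single failed run can shift the average by an order of magnitude while the median barely moves:

```python
# Illustrative sketch: how one outlier distorts the mean while the
# median stays stable. All values are hypothetical.
from statistics import mean, median, stdev

# Hypothetical best-fitness values from 10 runs of a stochastic optimizer.
runs = [0.11, 0.12, 0.10, 0.13, 0.11, 0.12, 0.10, 0.11, 0.12, 0.11]
runs_with_outlier = runs[:-1] + [5.0]  # one run that failed to converge

print(mean(runs), median(runs))            # both close to 0.11
print(mean(runs_with_outlier), median(runs_with_outlier))
# the mean jumps well above any typical run; the median barely moves
print(stdev(runs), stdev(runs_with_outlier))  # the stdev inflates too
```

Note that neither statistic tells us whether two algorithms actually differ; that is exactly why a statistical test is still needed.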



Keywords

statistical comparison
performance assessment
meta-heuristics
stochastic optimization algorithms


Aims and Learning Objectives

Many researchers have difficulties making a statistical analysis of their data, which they need in order to interpret their results correctly. Since many authors, reviewers, and even some editors lack knowledge on this subject, we come across many journal papers with conclusions based on flawed statistics. To familiarize participants with making a proper statistical comparison, we provide a tutorial on how to perform one, focusing on state-of-the-art approaches that provide robust statistical results. We present specific case studies in which a statistical comparison is made using single- and multi-objective stochastic optimization algorithms.

Target Audience

The target audience is researchers (PhD students and senior researchers) who need to compare their results against state-of-the-art approaches, which is nowadays a requirement for publishing a scientific paper of merit.

Prerequisite Knowledge of Audience

Some basic knowledge of statistics is helpful, but it is not mandatory.

Detailed Outline

1. Introduction to statistical analysis.
2. Background on hypothesis testing, different statistical tests, the required conditions for their usage and sample size.
3. Typical mistakes, what one needs to be careful of, and understanding why making a statistical comparison of data needs to be done properly.
4. Understanding the difference between statistical and practical significance.
5. Understanding the effect that performance measures have on making a statistical comparison.
6. Defining single-problem and multiple-problem analysis.
7. Insight into pairwise comparison, multiple comparisons (all vs. all), and multiple comparisons with a control algorithm (one vs. all).
8. Standard approaches to making statistical comparisons and their deficiencies.
9. Latest advances in making statistical comparisons (e.g., Deep Statistical Comparison), which provide more robust statistical results in the presence of outliers and statistically insignificant differences between data values.
10. Examples of all possible statistical scenarios in single-objective optimization and caveats.
11. Examples of all possible statistical scenarios in multi-objective optimization and caveats.
12. Presentation of a tool that automatizes and simplifies the whole process of making a statistical comparison.
13. Take-home messages.
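As a small taste of item 7 (a pairwise comparison in a multiple-problem analysis), the sketch below applies a two-sided sign test over per-problem wins. The error values are hypothetical, and the sign test is only one simple option; the tutorial itself covers more robust approaches such as Deep Statistical Comparison.

```python
# Hypothetical sketch: two-sided sign test for a pairwise, multiple-problem
# comparison of two algorithms (A vs. B) using mean errors per problem.
from math import comb

# Hypothetical mean errors of algorithms A and B on 12 benchmark problems.
a = [0.52, 0.48, 0.61, 0.40, 0.55, 0.47, 0.50, 0.44, 0.58, 0.49, 0.45, 0.53]
b = [0.60, 0.55, 0.59, 0.51, 0.62, 0.53, 0.57, 0.49, 0.66, 0.54, 0.52, 0.61]

wins_a = sum(x < y for x, y in zip(a, b))  # problems where A is better
n = sum(x != y for x, y in zip(a, b))      # exact ties are discarded

# Two-sided binomial p-value under H0: each algorithm wins with p = 0.5.
k = max(wins_a, n - wins_a)
p_value = min(1.0, sum(comb(n, i) for i in range(k, n + 1)) / 2 ** (n - 1))

print(f"A wins on {wins_a}/{n} problems, sign-test p = {p_value:.4f}")
```

A low p-value here only signals a statistically significant difference in wins; whether the difference is practically significant (item 4 in the outline) is a separate question.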


Secretariat Contacts
e-mail: ijcci.secretariat@insticc.org
