April 2023
MICCAI 2023: 46 brilliant challenges were accepted! Details on challenge days, virtual challenges, etc. will be provided on the main MICCAI website soon.
According to a recent study on common practice in biomedical image analysis, reproduction and adequate interpretation of challenge results are often not possible because only a fraction of relevant information is typically reported. To address this issue, we have implemented this online platform for challenge proposal submissions with structured descriptions of challenge protocols. The parameters for challenge design have been developed in collaboration with multiple institutions worldwide.
"The one practice that can universally commended is the transparent and complete reporting of all facets of a study, allowing a critical reader to evaluate the work and fully understand its strengths and limitations." Nature Neuroscience 2017, https://doi.org/10.1038/nn.4500
Challenge results will be interpretable.
The structured challenge submission system will make challenges reproducible.
The standardized form makes it easy to include all important information without overlooking key aspects.
Putting all of these aspects together, your challenge will be of high quality!
Currently, the submission system is used for the MICCAI 2023 conference. In the past, it was used for several other conferences.
MICCAI 2023
Timeline
- Website opens for submissions: 1 November 2022
- Proposal submission deadline: 8 December 2022 (23:59 GMT)
- Review deadline: 20 January 2023 (23:59 GMT)
- First round of feedback: 27 January 2023
- Rebuttal deadline: 17 February 2023 (23:59 GMT)
- Final feedback: 3 March 2023
The number of biomedical image analysis challenges organized per year is steadily increasing. Recent research, however, revealed that common practice related to challenge reporting does not allow for adequate interpretation and reproducibility of results. To address the discrepancy between the impact of challenges and their quality (control), the Biomedical Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations for the reporting of challenges. The BIAS statement aims to improve the transparency of the reporting of a biomedical image analysis challenge regardless of the field of application, image modality or task category assessed.
The parameter list used to create challenge proposals was developed by the BIAS initiative, which created a guideline on how to design and report challenges in a structured manner. The forms necessary to propose a (MICCAI endorsed) challenge can be downloaded below. For details, please check Overview > MICCAI endorsed events or the Help section.
The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network is a global initiative with the aim of improving the quality of research publications and of research itself. A key mission in this context is to achieve accurate, complete and transparent reporting of health research studies to support reproducibility and usefulness. Between 2006 and 2019, more than 400 reporting guidelines were published under the umbrella of the EQUATOR network. A well-known guideline is the CONSORT statement, developed for the reporting of randomized controlled trials. Prominent journals, such as The Lancet, JAMA or the British Medical Journal, require the CONSORT checklist to be submitted along with the actual paper when reporting the results of a randomized controlled trial.
The BIAS guideline was registered with the EQUATOR network: BIAS: Transparent reporting of biomedical image analysis challenges
Several challenges have been proposed in the structured challenge submission system for multiple conferences. In this section, we will show statistics based on all (accepted) challenges in the system for MICCAI, ISBI and MIDL conferences.
Organizing a challenge based on an accepted challenge proposal may be time-consuming, especially when large-scale data annotation is necessary. To address the traditionally tight schedule between MICCAI challenge acceptance and organization, we offered an early review of MICCAI 2024 challenge proposals in the call for MICCAI 2023 challenges. Organizers interested in running a challenge in 2024 were able to receive an early acceptance, leaving time for careful preparation of the challenge. Only convincing challenge proposals were accepted in this round; all other challenges were asked to resubmit in the next call for challenges.
You wish to organize a MICCAI endorsed challenge? Please fill out the form and send it to MICCAI-board-challenge-WG(at)dkfz-heidelberg.de.
You want to organize a great challenge but are afraid that it might get rejected? You are new to the field of challenges and want to avoid common pitfalls? We collected points that will help you get your challenge accepted!
Validation studies such as challenges need to be reproducible and interpretable. To achieve this, we recommend that you report all important challenge design parameters. Below, you can download the checklist of challenge design parameters, which can be filled out directly.
The dataset is one of the main components of your challenge. The rankings rely on the dataset as well as on the general performance of the participating algorithms. In machine learning, you should try to acquire as much data as possible so that algorithms are not limited by a poor approximation of the underlying problem. Of course, this is tough in the medical domain, but challenges with only very few cases may not attract many researchers, which in turn may lead to rejection.
The dataset should reflect the natural variability of the imaged objects. In the best case, it was acquired from multiple sources. The split of the dataset into training and test sets strongly influences algorithm performance. Be aware that imbalanced training data may lead to worse results. Furthermore, make sure that training and test data share similar distributions, while the test data should also be sufficient to assess the generalization capabilities of the algorithms.
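For instance, here is a minimal sketch of a reproducible, stratified train/test split. It assumes scikit-learn is available; the case IDs and per-case site labels are purely hypothetical, and any other relevant attribute (scanner, center, pathology) can be used for stratification in the same way.

```python
from sklearn.model_selection import train_test_split

# Hypothetical case table: one entry per case plus the attribute
# you want to keep balanced across the split (here: acquisition site).
case_ids = [f"case_{i:03d}" for i in range(100)]
sites = ["site_A"] * 60 + ["site_B"] * 40  # assumed multi-center data

train_ids, test_ids = train_test_split(
    case_ids,
    test_size=0.3,      # 70/30 split; adjust to your challenge
    stratify=sites,     # keep the site distribution similar in both sets
    random_state=42,    # fixed seed so the split is reproducible and reportable
)
print(len(train_ids), len(test_ids))  # 70 30
```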
The metrics should be well chosen. Don't use a metric just because many other researchers are using it, without thinking it through. For example, the Dice similarity coefficient (DSC) is very popular in segmentation tasks, but it is not well suited when the structures to be segmented are very small. Define the goal of your challenge, define your assessment aims and find the metrics that fit best. If you are unsure, here are some reads to consider:
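To make the small-object caveat concrete, here is a minimal, hypothetical NumPy sketch with toy 1D masks (not tied to any particular evaluation toolkit): the same absolute error of two voxels barely affects the DSC of a large structure but drops it substantially for a tiny one.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

# Toy 1D "segmentations": a large and a small object,
# each predicted with the same absolute error of two voxels.
gt_large = np.zeros(1000, dtype=bool); gt_large[:200] = True
pred_large = np.zeros(1000, dtype=bool); pred_large[:198] = True

gt_small = np.zeros(1000, dtype=bool); gt_small[:4] = True
pred_small = np.zeros(1000, dtype=bool); pred_small[:2] = True

print(f"DSC, large object: {dice(pred_large, gt_large):.3f}")  # ~0.995
print(f"DSC, small object: {dice(pred_small, gt_small):.3f}")  # ~0.667
```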
Organizing a challenge also includes the analysis of results. The algorithm performance should be validated in a thoughtful manner. Is the ranking robust against small perturbations? What would happen to the ranking if you slightly changed the test dataset? What would happen if you combined some of the submitted methods? Statistics is important; don't leave it out. Check out the following links for details:
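One common way to probe ranking robustness is to bootstrap the test cases and check how often the ranking (here, simply the winner) changes. The sketch below is purely illustrative: the scores are random toy data, the aggregation rule is a plain mean, and dedicated open-source toolkits for analyzing challenge results should be preferred in practice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical metric matrix: scores[i, j] = metric of algorithm j on test case i
# (higher = better). In a real challenge this comes from your evaluation pipeline.
n_cases, n_algorithms = 50, 5
scores = rng.uniform(0.6, 0.95, size=(n_cases, n_algorithms))

def winner(case_scores: np.ndarray) -> int:
    """Aggregate-then-rank: mean metric per algorithm, highest mean wins."""
    return int(np.argmax(case_scores.mean(axis=0)))

original_winner = winner(scores)

# Bootstrap the test set: resample cases with replacement and re-rank.
n_boot = 1000
same_winner = 0
for _ in range(n_boot):
    sample = scores[rng.integers(0, n_cases, size=n_cases)]
    same_winner += winner(sample) == original_winner

print(f"Winner unchanged in {same_winner / n_boot:.1%} of bootstrap samples")
```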
Why do you want to run your challenge? What is your motivation? Are you trying to find algorithms that exceed the current state of the art? Are you trying to find solutions for an ill-posed problem? These questions should be pondered thoroughly from the beginning.
Challenges are validation studies. As such, the design should be completely transparent, and design choices should be justified and comprehensible. In the past, challenges were rejected because some design choices were not justified at all or were not reproducible. Here are some of the parameters that were most often left unclear in rejected challenges. As described above, the dataset is highly important: it should be absolutely clear why you chose a specific training/test split. The annotation process is just as important as the data itself. Questions such as 'Who annotated the data?', 'What was the experience level of the annotators?' and 'Did multiple annotators annotate the data?' should be answered clearly, and a link to the labeling protocol should be provided. Make sure to justify why you chose specific performance metrics. The metrics should reflect your challenge goal (see above) and should assess valid properties; be aware of mutually dependent metrics. The ranking computation should be clear; in the best case, write it out in pseudoalgorithmic form (a sketch follows below) and make clear why this computation was chosen. Finally, how does the submission of results work for the participants? Are they required to submit Docker containers, upload results, or upload code? The answers to all of these questions matter.
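As an example of what "pseudoalgorithmic form" can look like, here is a hypothetical rank-then-aggregate rule written out as a short Python function. Your actual ranking scheme will likely differ; the point is that a reviewer can follow every step, including how ties and missing cases are handled (both omitted here for brevity).

```python
import numpy as np

def rank_then_aggregate(scores: np.ndarray) -> np.ndarray:
    """Hypothetical ranking rule, for illustration only.

    scores[i, j]: metric value of algorithm j on test case i (higher = better).
    1. Per test case, rank all algorithms (rank 1 = best on that case).
    2. Average each algorithm's ranks over all test cases.
    3. Order algorithms by this mean rank (lower = better).
    """
    order = np.argsort(-scores, axis=1)        # per case: algorithm indices, best first
    per_case_ranks = np.empty_like(order)
    rows = np.arange(scores.shape[0])[:, None]
    per_case_ranks[rows, order] = np.arange(1, scores.shape[1] + 1)
    mean_ranks = per_case_ranks.mean(axis=0)   # mean rank per algorithm
    return np.argsort(mean_ranks)              # algorithm indices, best first

# Tiny example: 2 test cases, 3 algorithms.
scores = np.array([[0.9, 0.8, 0.7],
                   [0.6, 0.9, 0.8]])
print(rank_then_aggregate(scores))  # [1 0 2]: algorithm 1 wins on mean rank
```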
Finally, take your best shot when filling out the submission form. We know, it's tough; there are many parameters. But all of them matter. Reviewers will notice immediately if you filled out the parameters in a rush. You wouldn't submit a paper full of typos, grammatical errors and incomplete sentences, right? So don't submit a challenge proposal like that either.
The team is associated with the Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ) Heidelberg.