NEWS - January 2024
We developed a completely new version of the structured challenge submission system with modern, resilient technologies. It now offers a robust, scalable, and user-friendly platform for the standardized and high-quality submission of challenges!
Structured Challenge Submission
According to a recent study on common practice in biomedical image analysis, reproduction and adequate interpretation of challenge results are often not possible because only a fraction of relevant information is typically reported. To address this issue, we have implemented this online platform for challenge proposal submissions with structured descriptions of challenge protocols. The parameters for challenge design have been developed in collaboration with multiple institutions worldwide.
Transparency
"The one practice that can universally commended is the transparent and complete reporting of all facets of a study, allowing a critical reader to evaluate the work and fully understand its strengths and limitations." Nature Neuroscience 2017, https://doi.org/10.1038/nn.4500
Interpretability
Complete reporting of the challenge design makes challenge results interpretable.
Reproducibility
The structured challenge submission system makes challenges reproducible.
Standardized approach
The standardized form makes it easy to report all relevant information without overlooking important aspects.
High quality challenges
Putting all of these aspects together, your challenge will be of high quality!
Current platforms
Currently, the submission system is used for the MICCAI 2024 conference. In the past, it was used for previous MICCAI, ISBI and MIDL conferences.
1 April 2023 (MICCAI): MICCAI 2023: 46 brilliant challenges were accepted! Details on challenge days, virtual challenges, etc. will be provided on the main MICCAI website soon.
12 December 2022 (MICCAI): MICCAI 2023: We are sorry for the severe technical issues on the submission website. We are working on a completely new system for next year to avoid these issues in the future. Stay tuned!
1 November 2022 (MICCAI): MICCAI 2023 is accepting submissions.
14 December 2021 (General): The structured challenge design description forms can now be found on this website. Check out BIAS, Overview or Help for details!
14 December 2021 (MICCAI): MICCAI 2022: Submission deadline passed. Thank you for 45 interesting challenge proposals!
15 October 2021 (MICCAI): The MICCAI 2022 structured challenge submission starts accepting proposals on November 1, 2021.
12 March 2021 (General): We have a new landing page! It combines the structured challenge submissions across all conferences.
1 March 2021 (MICCAI): 20 MICCAI 2021 challenges, 4 MICCAI 2022 challenges and 4 MICCAI endorsed challenges were accepted!
BIAS Initiative
The number of biomedical image analysis challenges organized per year is steadily increasing. Recent research, however, revealed that common practice related to challenge reporting does not allow for adequate interpretation and reproducibility of results. To address the discrepancy between the impact of challenges and the quality control applied to them, the Biomedical Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations for the reporting of challenges. The BIAS statement aims to improve the transparency of the reporting of a biomedical image analysis challenge regardless of field of application, image modality or task category assessed.
The parameter list used to create challenge proposals was developed by the BIAS initiative, which created a guideline on how to report and design challenges in a structured manner. The forms necessary to propose a (MICCAI endorsed) challenge can be downloaded below. For details, please check Overview > MICCAI endorsed events or the Help section.
Equator network
The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network is a global initiative with the aim of improving the quality of research publications and research itself. A key mission in this context is to achieve accurate, complete and transparent reporting of health research studies to support reproducibility and usefulness. Between 2006 and 2019, more than 400 reporting guidelines were published under the umbrella of the EQUATOR network. A well-known example is the CONSORT statement, developed for the reporting of randomized controlled trials. Prominent journals, such as The Lancet, JAMA or the British Medical Journal, require the CONSORT checklist to be submitted along with the actual paper when reporting results of a randomized controlled trial.
Lena Maier-Hein (Chair), Annika Reinke, Matthias Eisenmann, Sinan Onogur, Annette Kopp-Schneider, DKFZ Heidelberg
Spyridon Bakas, Perelman School of Medicine, University of Pennsylvania
Michal Kozubek, Masaryk University
Bennett A. Landman, Vanderbilt University
Anne L. Martel, Sunnybrook Research Institute; University of Toronto
Tal Arbel, McGill University
Allan Hanbury, Technische Universität (TU) Wien; Complexity Science Hub Vienna
Pierre Jannin, Université de Rennes 1, Inserm
Henning Müller, University of Applied Sciences Western Switzerland (HES-SO); University of Geneva
Julio Saez-Rodriguez, Heidelberg University; Heidelberg University Hospital; Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen
Bram van Ginneken, Radboud University Medical Center; Fraunhofer MEVIS
Paul Jäger, Fabian Isensee, Jens Petersen, Klaus Maier-Hein, DKFZ Heidelberg
Carole H. Sudre, Michela Antonelli, M. Jorge Cardoso, King's College London (KCL); University College London (UCL)
Veronika Cheplygina, IT University of Copenhagen
Keyvan Farahani, Ronald M. Summers, National Cancer Institute (NIH)
Ben Glocker, Imperial College London
Charles E. Kahn, Perelman School of Medicine, University of Pennsylvania
Jens Kleesiek, University Medicine Essen
Tahsin Kurc, Stony Brook University
Geert Litjens, Radboud University Medical Center
Bjoern Menze, University of Zurich
Mauricio Reyes, Bern University Hospital
Nicola Rieke, NVIDIA; Technical University of Munich (TUM)
Bram Stieltjes, University Hospital of Basel
Sotirios A. Tsaftaris, The University of Edinburgh
Statistics
Several challenges have been proposed in the structured challenge submission system for multiple conferences. In this section, we show statistics based on all (accepted) challenges in the system for the MICCAI, ISBI and MIDL conferences.
The following challenges have been submitted and reviewed through the structured challenge submission system. Those challenges and others can also be found via the grand-challenge.org platform.
Organizing a challenge based on an accepted challenge proposal may be time-consuming, especially when large-scale data annotation is necessary. To address the traditionally tight schedule between MICCAI challenge acceptance and organization, we offered an early review of MICCAI 2024 challenge proposals in the call for MICCAI 2023 challenges. Organizers interested in running a challenge in 2024 were able to receive an early acceptance that left time for careful preparation of the challenge. Only convincing challenge proposals were accepted in this round. Other challenges were asked to resubmit in the next call for challenges.
Diabetic Foot Ulcers Grand Challenge 2024
Medical Image De-Identification Benchmark
2023 Kidney and Kidney Tumor Segmentation Challenge
Airway-Informed Quantitative CT Imaging Biomarker for Fibrotic Lung Disease
A tumor and liver automatic segmentation challenge
Automated Lesion Segmentation in Whole-Body FDG-PET/CT - Domain Generalization
Automated prediction of treatment effectiveness in ovarian cancer using histopathological images
Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs
AutomatiC Registration Of Breast cAncer Tissue 2023
Cardiac MRI Reconstruction Challenge
Cephalometric Landmark Detection in Lateral X-ray Images
Cerebral artery segmentation Challenge
Circle of Willis Intracranial Artery Classification and Quantification Challenge
Cross-Modality Domain Adaptation for Medical Image Segmentation
Dental Enumeration and Diagnosis on Panoramic X-rays Challenge
Endoscopic Vision Challenge 2023
Energy efficient deep learning for medical imaging
Energy-efficient Medical Image Processing
Fast, Low-resource, and Accurate oRgan and Pan-cancer sEgmentation in Abdomen CT
Robust Atria Segmentation from 3D Contrast-Enhanced Magnetic Resonance Imaging
MICCAI Computational Precision Medicine 2018
Multi-Organ Histopathology Nucleus Segmentation Challenge for H&E Stained Images 2018
Tumor Infiltrating Lymphocytes in Breast Cancer (TIGER) Challenge 2022
STOIC2021 - COVID-19 AI Challenge
Node21 - Nodule Detection in Chest Radiographs
Carotid Vessel Wall Segmentation Challenge 2021
Foot Ulcer Segmentation Challenge 2021
Multiple sclerosis new lesions segmentation challenge 2021
PAIP2021: Perineural Invasion in Multiple Organ Cancer (Colon, Prostate, and Pancreatobiliary tract)
Do you wish to organize a MICCAI endorsed challenge? Please fill out the form and send it to MICCAI-board-challenge-WG(at)dkfz-heidelberg.de.
Help - How to organize a successful challenge
Do you want to organize a great challenge but are afraid that it might get rejected? Are you new to the field of challenges and want to avoid common pitfalls? We have collected points that will help you get your challenge accepted!
1. Complete and transparent reporting
Validation studies such as challenges need to be reproducible and interpretable. To achieve this, we recommend reporting all important challenge design parameters. Below, you can download the checklist of challenge design parameters, which can be filled out directly. Do you wish to organize a MICCAI endorsed challenge? Please fill out the form and send it to MICCAI-board-challenge-WG(at)dkfz-heidelberg.de.
2. Large dataset sizes
The dataset is one of the main components of your challenge. Both the rankings and the general performance of the participating algorithms rely on it. In machine learning, you should try to acquire as much data as possible so that the algorithms can approximate the underlying task well. Of course, this is tough in the medical domain, but challenges with only very few cases may not attract many researchers, which in turn may lead to rejection.
3. Training-test dataset balance
The dataset should reflect the natural variability of the imaged objects. In the best case, it was acquired from multiple sources. The split of the dataset into training and test sets strongly influences algorithm performance. Be aware that imbalanced training data may lead to worse results. Furthermore, make sure that training and test data share similar distributions, while the test data should still be sufficient to probe the generalization capabilities of the algorithms; see the sketch below.
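The following is a minimal sketch of such a split, assuming the scikit-learn library is available; the `cases` and `sites` arrays are hypothetical placeholders for your per-case data and acquisition-source labels. Grouping by source keeps all cases from one site together, so the test set probes generalization to unseen sources:

```python
# Minimal sketch; `cases` and `sites` are hypothetical toy placeholders.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

cases = np.arange(100)                    # e.g. 100 image cases
sites = np.repeat(np.arange(5), 20)       # 5 acquisition sites, 20 cases each

# Keep all cases of a site together so the test set contains unseen sources.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(cases, groups=sites))

print(f"train: {len(train_idx)} cases, test: {len(test_idx)} cases")
print("held-out test sites:", np.unique(sites[test_idx]))
```

A per-site split is only one possible strategy; stratifying by class labels or patient identifiers may be equally important, depending on your task.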
4. Metrics/assessment aims reflecting the challenge goal
The metrics should be well chosen. Don't use a metric just because many other researchers are using it, without thinking it through. For example, the Dice similarity coefficient (DSC) is very popular in segmentation tasks, but it is not well suited when the objects to be segmented are small. Define the goal of your challenge, define assessment aims and find the metrics that fit best. If you are unsure, consult the literature on metric selection; the small-object pitfall is illustrated in the sketch below.
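As a toy illustration, assuming only NumPy and synthetic masks, the following shows that the same single-pixel error costs a small structure far more Dice than a large one:

```python
# Toy illustration with synthetic binary masks (not real data).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

for size in (1000, 10):                  # large vs. small structure (in pixels)
    gt = np.zeros(2000, dtype=bool)      # ground-truth mask
    gt[:size] = True
    pred = gt.copy()
    pred[0] = False                      # identical single-pixel error
    print(f"object size {size:4d}: Dice = {dice(gt, pred):.3f}")
```

Running this prints a Dice of about 0.999 for the large object but only about 0.947 for the small one, even though the absolute error is identical.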
5. Thorough analysis of results
Organizing a challenge also includes the analysis of results. The algorithm performance should be validated in a thoughtful manner. Is the ranking robust against small perturbations? What would happen to the ranking if you slightly changed the test dataset? What would happen if you combined some of the submitted methods? Statistics is important. Don't leave it out. A minimal robustness check is sketched below.
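One simple robustness check, sketched here under the assumption of a hypothetical algorithms-by-cases matrix of per-case metric values, is to bootstrap the test set and count how often each algorithm keeps the top rank:

```python
# Bootstrap ranking-stability sketch; `scores` is hypothetical toy data.
import numpy as np

rng = np.random.default_rng(0)
n_algorithms, n_cases = 4, 50
scores = rng.uniform(0.6, 0.95, size=(n_algorithms, n_cases))  # per-case metrics

n_boot = 1000
wins = np.zeros(n_algorithms, dtype=int)
for _ in range(n_boot):
    # Resample test cases with replacement and recompute the winner.
    idx = rng.integers(0, n_cases, size=n_cases)
    wins[np.argmax(scores[:, idx].mean(axis=1))] += 1

for i, w in enumerate(wins):
    print(f"algorithm {i}: ranked first in {w / n_boot:.1%} of bootstrap samples")
```

If the nominal winner tops the ranking in only a bare majority of bootstrap samples, the ranking is fragile and should be reported as such.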
6. Clear motivation
Why do you want to run your challenge? What is your motivation? Are you trying to find algorithms that exceed the current state of the art? Are you trying to find solutions to an ill-posed problem? These questions should be pondered thoroughly from the beginning.
7. Details and justification
Challenges are validation studies. As such, the design should be completely transparent, and design choices should be justified and comprehensible. In the past, challenges were rejected because some design choices were not justified at all or not reproducible. Here are some of the parameters that were most often unclear in rejected challenges. As described above, the dataset is highly important; it should be absolutely clear why you chose a specific training/test split. Next, the annotation process is as important as the data itself. Questions such as 'Who annotated the data?', 'What was the experience level of the annotators?' and 'Did multiple annotators annotate the data?' should be clearly answered, and a link to the labeling protocol should be provided. Make sure to justify why you chose specific performance metrics. The metrics should reflect your challenge goal (see above) and should assess valid properties; be aware of mutually dependent metrics. The ranking computation should be clear; in the best case, write it in a pseudoalgorithmic form, as in the sketch below, and make clear why this computation was chosen. How does the submission of results work for the participants? Are they required to build Docker containers? Upload results? Upload code? All of these questions need clear answers.
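To make the idea of a pseudoalgorithmic ranking description concrete, here is a hypothetical sketch of one common aggregation scheme, "mean-then-rank" (one choice among several, not a prescribed method); `scores` is again a toy algorithms-by-cases matrix:

```python
# Hypothetical "mean-then-rank" sketch; higher scores are better.
import numpy as np

def mean_then_rank(scores: np.ndarray) -> np.ndarray:
    """Aggregate per-case scores to a mean per algorithm, then rank (1 = best)."""
    mean_scores = scores.mean(axis=1)
    order = np.argsort(-mean_scores)          # descending: best algorithm first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(order) + 1)
    return ranks

scores = np.array([[0.90, 0.85, 0.88],
                   [0.70, 0.95, 0.80],
                   [0.91, 0.84, 0.87]])
print(mean_then_rank(scores))                 # -> [1 3 2]
```

An alternative is "rank-then-mean", which ranks the algorithms per case first and then averages the ranks; the two schemes can produce different winners, so the proposal should state explicitly which one is used and why.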
Finally, give it your best shot when filling out the submission form. We know it's tough; there are many parameters. But all of them matter. The reviewers will notice immediately if you filled out the parameters in a rush. You wouldn't submit a paper full of typos, grammatical errors and incomplete sentences, so don't submit a challenge proposal that way either.
Developer Team
The team is associated with the Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ) Heidelberg.