NEWS - October 15, 2021: The MICCAI 2022 structured challenge submission system starts accepting proposals on November 1, 2021!


Structured Challenge Submission

According to a recent study on common practice in biomedical image analysis, reproduction and adequate interpretation of challenge results are often not possible because only a fraction of relevant information is typically reported. To address this issue, we have implemented this online platform for challenge proposal submissions with structured descriptions of challenge protocols. The parameters for challenge design have been developed in collaboration with multiple institutions worldwide.

Transparency

"The one practice that can universally commended is the transparent and complete reporting of all facets of a study, allowing a critical reader to evaluate the work and fully understand its strengths and limitations." Nature Neuroscience 2017, https://doi.org/10.1038/nn.4500

Interpretability

Structured reporting of the challenge design makes challenge results interpretable.

Reproducibility

The structured challenge submission system helps make challenges reproducible.

Standardized approach

The standardized form makes it easy to include all relevant information without overlooking important aspects.

High quality challenges

Putting all of these aspects together, your challenge will be of high quality!

Current platforms

Currently, the submission system is used for the following conferences:


MICCAI 2022

Submission system

Timeline
  • Website opens for submissions: 1 November 2021
  • Proposal submission deadline: 10 December 2021 (23:59 GMT)
  • Review deadline: 17 January 2022 (23:59 GMT)
  • First round of feedback: 24 January 2022
  • Rebuttal deadline: 11 February 2022 (23:59 GMT)
  • Final feedback: 25 February 2022

ISBI 2021

Submission system

Timeline
  • Website opens for submissions: 15 July 2020
  • Proposal submission deadline: 2 October 2020 (23:59 CEST)
  • Review deadline: 16 October 2020 (23:59 CEST)
  • Rebuttal deadline: 21 October 2020 (23:59 CEST)
  • Final feedback: 23 October 2020

MIDL 2020

Submission system

Timeline
  • Website opens for submissions: 1 December 2019
  • Proposal submission deadline: 13 January 2020 (23:59 CET)
  • First round of feedback: 3 February 2020
  • Rebuttal deadline: 14 February 2020 (23:59 CET)
  • Final feedback: 21 February 2020


BIAS Initiative

The number of biomedical image analysis challenges organized per year is steadily increasing. Recent research, however, revealed that common practice related to challenge reporting does not allow for adequate interpretation and reproducibility of results. To address the discrepancy between the impact of challenges and their quality control, the Biomedical Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations for the reporting of challenges. The BIAS statement aims to improve the transparency of the reporting of a biomedical image analysis challenge regardless of the field of application, image modality, or task category assessed.
The parameter list used to create challenge proposals was developed by the BIAS initiative, which also created a guideline on how to design and report challenges in a structured manner.

Equator network

The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network is a global initiative with the aim of improving the quality of research publications and research itself. A key mission in this context is to achieve accurate, complete and transparent reporting of health research studies to support reproducibility and usefulness. Between 2006 and 2019, more than 400 reporting guidelines were published under the umbrella of the EQUATOR network. A well-known guideline is the CONSORT statement, developed for the reporting of randomized controlled trials. Prominent journals, such as The Lancet, JAMA and the British Medical Journal, require the CONSORT checklist to be submitted along with the actual paper when reporting results of a randomized controlled trial.

The BIAS guideline was registered with the EQUATOR network: BIAS: Transparent reporting of biomedical image analysis challenges

BIAS Members

  • Lena Maier-Hein (Chair), Annika Reinke, Matthias Eisenmann, Sinan Onogur, Annette Kopp-Schneider, DKFZ Heidelberg
  • Spyridon Bakas, Perelman School of Medicine, University of Pennsylvania
  • Michal Kozubek, Masaryk University
  • Bennett A. Landman, Vanderbilt University
  • Anne L. Martel, Sunnybrook Research Institute; University of Toronto
  • Tal Arbel, McGill University
  • Allan Hanbury, Technische Universität (TU) Wien; Complexity Science Hub Vienna
  • Pierre Jannin, Université de Rennes 1, Inserm
  • Henning Müller, University of Applied Sciences Western Switzerland (HES-SO); University of Geneva
  • Julio Saez-Rodriguez, Heidelberg University; Heidelberg University Hospital; Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen
  • Bram van Ginneken, Radboud University Medical Center; Fraunhofer MEVIS
  • Paul Jäger, Fabian Isensee, Jens Petersen, Klaus Maier-Hein, DKFZ Heidelberg
  • Carole H. Sudre, Michela Antonelli, M. Jorge Cardoso, King's College London (KCL); University College London (UCL)
  • Veronika Cheplygina, IT University of Copenhagen
  • Keyvan Farahani, Ronald M. Summers, National Cancer Institute (NIH)
  • Ben Glocker, Imperial College London
  • Charles E. Kahn, Perelman School of Medicine, University of Pennsylvania
  • Jens Kleesiek, University Medicine Essen
  • Tahsin Kurc, Stony Brook University
  • Geert Litjens, Radboud University Medical Center
  • Bjoern Menze, University of Zurich
  • Mauricio Reyes, Bern University Hospital
  • Nicola Rieke, NVIDIA; Technical University of Munich (TUM)
  • Bram Stieltjes, University Hospital of Basel
  • Sotirios A. Tsaftaris, The University of Edinburgh

Statistics

Several challenges have been proposed in the structured challenge submission system for multiple conferences. In this section, we show statistics based on all (accepted) challenges in the system for the MICCAI, ISBI and MIDL conferences:

  • Number of challenges from 2018 to 2021 that were accepted (red) or rejected (green).
  • Number of task categories of accepted challenge tasks per year.
  • Number of tasks/sub-challenges of accepted challenges, with the density of the number of tasks visualized by a violin plot per year.
  • Number of imaging modalities of accepted challenge tasks.
  • Number of fields of application of accepted challenge tasks per year.
  • Number of training (green) and test (red) cases of accepted challenge tasks, shown on a log scale.

Challenges Overview

The following challenges have been submitted and reviewed through the structured challenge submission system. These and other challenges can also be found on the grand-challenge.org platform.


Organizing a challenge based on an accepted challenge proposal can be time-consuming, especially when large-scale data annotation is necessary. To address the traditionally tight schedule between MICCAI challenge acceptance and organization, we offered an early review of MICCAI 2022 challenge proposals in the call for MICCAI 2021 challenges. Organizers interested in running a challenge in 2022 were able to receive an early acceptance, leaving time for careful preparation of the challenge. Only convincing challenge proposals were accepted in this round; other challenges were asked to resubmit in the next call for challenges.


3D Teeth Scan Segmentation and Labelling Challenge

A website is not yet established.

Diabetic Foot Ulcers Grand Challenge 2022

Whole-heart and Great Vessel Segmentation from 3D Cardiovascular Magnetic Resonance Images in Congenital Heart Disease (Part II)


2021 Kidney and Kidney Tumor Segmentation Challenge

A website is not yet established.

Cross-Modality Domain Adaptation for Medical Image Segmentation

Diabetic Foot Ulcers Grand Challenge 2021

Deep Generative Model Challenge for Domain Adaptation in Surgery 2021

Fetal Brain Tissue Annotation and Segmentation Challenge

A website is not yet established.

Multi-Disease, Multi-View & Multi-Center Right Ventricular Segmentation in Cardiac MRI

RSNA-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021

Diffusion-Simulated Connectivity Challenge

Fast and Low GPU Memory Abdominal Organ Segmentation in CT

A website is not yet established.

HEad and neCK TumOR segmentation in 3D PET/CT images

A website is not yet established.

Medical Out-of-Distribution Analysis Challenge 2021

MItosis DOmain Generalization Challenge 2021

MRI reconstruction challenge with realistic noise

A website is not yet established.

Quantification of Uncertainties in Biomedical Image Quantification 2021

SARAS challenge for Multi-domain Endoscopic Surgeon Action Detection

A website is not yet established.

Towards the Automatization of Cranial Implant Design in Cranioplasty: 2nd MICCAI Challenge on Automatic Cranial Implant Design

A website is not yet established.


A-AFMA: Automatic amniotic fluid measurement and analysis from ultrasound video

CTC: 6th ISBI Cell Tracking Challenge

EndoCV2021: Addressing generalisability in “polyp” detection and segmentation challenge

MitoEM: Large-scale Mitochondria 3D Instance Segmentation from Electron Microscopy Images

RIADD: Retinal Image Analysis for multi-Disease Detection

SegPC-2021: Segmentation of Multiple Myeloma Plasma Cells in Microscopic Images


2nd Retinal Fundus Glaucoma Challenge

3D Head and Neck Tumor Segmentation in PET/CT

Anatomical Brain Barriers to Cancer Spread: Segmentation from CT and MR Images

Automated Segmentation Of Coronary Arteries

MICCAI Brain Tumor Segmentation (BraTS) 2020 Benchmark: "Prediction of Survival and Pseudoprogression"

Towards the Automatization of Cranial Implant Design in Cranioplasty

Rib Fracture Detection and Classification Challenge

Multi-sequence CMR based myocardial pathology segmentation challenge

Intracranial Aneurysm Detection and Segmentation Challenge

Diabetic Foot Ulcers Grand Challenge 2020

Multi-Centre, Multi-Vendor & Multi-Disease Cardiac Image Segmentation Challenge

Large Scale Vertebrae Segmentation Challenge

Thyroid Nodule Segmentation and Classification in Ultrasound Images

The PANDA challenge: Prostate cANcer graDe Assessment using the Gleason grading system

Automatic Structure Segmentation for Radiotherapy Planning Challenge 2020

Challenge withdrawn due to the COVID-19 pandemic.

International Skin Imaging Collaboration (ISIC) Challenge: using dermoscopic image context to diagnose melanoma

Automatic Evaluation of Myocardial Infarction from Delayed-Enhancement Cardiac MRI

Cerebral Aneurysm Detection and Analysis

Medical Out-of-Distribution Analysis Challenge

Quantification of Uncertainties in Biomedical Image Quantification

Automatic Lung Cancer Detection and Classification in Whole-slide Histopathology

Computational Precision Medicine Radiology-Pathology challenge on Brain Tumor Classification 2020

Super-resolution of Multi Dimensional Diffusion MRI data


White Matter Microstructure with Diffusion MRI Challenge

Multi-organ Nuclei Segmentation And Classification Challenge 2020

Endoscopy vision challenge on segmentation and detection

Diabetic Retinopathy Assessment Grading and Diagnosis

Automatic Detection challenge on Age-related Macular degeneration


Automatic detection of foreign objects on chest X-rays

Multi-channel Magnetic Resonance Image Reconstruction Challenge

SARAS endoscopic vision challenge for surgeon action detection


MICCAI Automatic Prostate Gleason Grading Challenge 2019

MS-CMRSeg 2019: Multi-sequence Cardiac MR Segmentation Challenge

MICCAI Grand Challenge on 6-month Infant Brain MRI Segmentation 2019

Large Scale Vertebrae Segmentation Challenge 2019

MICCAI Endoscopic Vision Challenge 2019

Computational Precision Medicine challenge on brain tumor classification 2019

MICCAI 2019 Challenge on accurate automated spinal curvature estimation

The 2nd MICCAI Challenge for Correction of Brain shift with Intra-Operative Ultrasound

Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019

MICCAI Angle closure Glaucoma Evaluation Challenge 2019

Digestive-System Pathological Detection and Segmentation Challenge

MICCAI Multimodal Brain Tumor Segmentation (BraTS) 2019 Benchmark: "Survival Prediction"

MICCAI Automatic Generation of Cardiovascular Diagnostic Report Challenge 2019

Cardiac Resynchronization Therapy electrophysiological challenge 2019

Left Ventricle Full Quantification Challenge MICCAI 2019

International Skin Imaging Collaboration (ISIC) Challenge on Dermoscopic Skin Lesion Analysis 2019

MICCAI Kidney Tumor Segmentation Challenge 2019

MUlti-dimensional DIffusion (MUDI) MRI Challenge 2019

MICCAI Challenge 2018 for Correction of Brainshift with Intra-Operative UltraSound


International Skin Imaging Collaboration (ISIC) Challenge on Dermoscopic Skin Lesion Analysis Toward Melanoma Detection 2018

MICCAI Multimodal Brain Tumor Segmentation (BraTS) Benchmark: 'Survival Prediction'

MICCAI Endoscopic Vision Challenge 2018

MICCAI Grand Challenge on MR Brain Image Segmentation

Ischemic Stroke Lesion Segmentation Challenge 2018

Multi-shell diffusion MRI harmonisation and enhancement challenge

Left Ventricle Full Quantification Challenge

MICCAI Intervertebral Disc Segmentation Challenge 2018

Robust Atria Segmentation from 3D Contrast-Enhanced Magnetic Resonance Imaging

MICCAI Computational Precision Medicine 2018

Multi-Organ Histopathology Nucleus Segmentation Challenge for H&E Stained Images 2018


Carotid Vessel Wall Segmentation Challenge 2021

Foot Ulcer Segmentation Challenge 2021

Multiple sclerosis new lesions segmentation challenge 2021

PAIP2021: Perineural Invasion in Multiple Organ Cancer (Colon, Prostate, and Pancreatobiliary tract)

Help - How to organize a successful challenge

Do you want to organize a great challenge but are afraid that it might get rejected? Are you new to the field of challenges and want to avoid common pitfalls? We collected seven reasons why challenges were rejected in the past, so you can avoid them!

1. Small datasets

The dataset is one of the main components of your challenge. The rankings rely on the dataset, as does the general performance of the participating algorithms. In machine learning, you should try to acquire as much data as possible so that the algorithms can approximate the underlying data distribution well. Of course, this is tough in the medical domain, but challenges with only very few cases may not attract many researchers, which may lead to rejection.

2. Training-test dataset imbalance

The dataset should reflect the natural variability of the imaged objects; in the best case, it was acquired from multiple sources. The split of the dataset into training and test sets strongly influences algorithm performance. Be aware that imbalanced training data may lead to worse results. Furthermore, make sure that training and test data share similar distributions, while the test data remain sufficient to probe the generalization capabilities of the algorithms. A minimal splitting sketch is shown below.
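As an illustration of such a split, here is a minimal sketch assuming scikit-learn is available; the case IDs and per-case labels (e.g., lesion grades) are hypothetical:

```python
# Minimal sketch: a stratified train/test split that keeps the label
# distribution similar in both sets. Assumes scikit-learn; the case IDs
# and labels below are hypothetical illustration data.
from sklearn.model_selection import train_test_split

case_ids = [f"case_{i:03d}" for i in range(100)]
labels = [i % 4 for i in range(100)]  # e.g., four lesion grades

train_ids, test_ids = train_test_split(
    case_ids,
    test_size=0.3,     # 70/30 split
    stratify=labels,   # preserve label balance across the two sets
    random_state=42,   # fixed seed so the split is reproducible
)
print(len(train_ids), len(test_ids))  # -> 70 30
```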

3. Metric/assessment aim doesn't fit the challenge goal

The metrics should be well chosen. Don't use a metric just because many other researchers are using it, without thinking it through. For example, the Dice similarity coefficient is very popular in segmentation tasks, but it is not well suited for very small structures. Define the goal of your challenge, define your assessment aims, and find the metrics that fit best; a small sketch below illustrates the pitfall.
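A minimal, numpy-only sketch (with hypothetical masks) of why Dice is fragile for small structures: the same two-pixel localization error is catastrophic for a tiny lesion but negligible for a large organ.

```python
# Minimal sketch (numpy only, hypothetical masks): the Dice score reacts
# very differently to the same localization error depending on object size.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

small = np.zeros((100, 100), dtype=bool)
small[50:53, 50:53] = True    # 3x3 "lesion"
large = np.zeros((100, 100), dtype=bool)
large[20:80, 20:80] = True    # 60x60 "organ"

# Simulate a prediction that is off by two pixels in each case.
print(f"small object: {dice(small, np.roll(small, 2, axis=0)):.2f}")  # ~0.33
print(f"large object: {dice(large, np.roll(large, 2, axis=0)):.2f}")  # ~0.97
```

For small structures, distance-based measures such as the Hausdorff distance or other boundary-based metrics may complement or replace Dice.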

4. No statistical analysis or uncertainty analysis

Organizing a challenge also includes the analysis of results. The algorithm performance should be validated in a thoughtful manner. Is the ranking robust against small perturbations? What would happen to the ranking if you slightly changed the test dataset? What would happen if you combined some of the submitted methods? Statistics matter; don't leave them out. One way to check ranking stability is sketched below.
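One common way to probe this is bootstrapping: resample the test cases with replacement many times and check how often the original ranking is reproduced. A minimal sketch with fictitious per-case scores (the algorithm names and score distributions are invented for illustration):

```python
# Minimal sketch (fictitious scores): bootstrap the test cases to probe
# how robust the ranking is against small changes of the test dataset.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 50
scores = {  # per-case metric values for three fictitious algorithms
    "alg_A": rng.normal(0.80, 0.05, n_cases),
    "alg_B": rng.normal(0.78, 0.10, n_cases),
    "alg_C": rng.normal(0.75, 0.05, n_cases),
}

def ranking(idx: np.ndarray) -> tuple:
    """Rank algorithms by mean score over the given cases (best first)."""
    means = {name: s[idx].mean() for name, s in scores.items()}
    return tuple(sorted(means, key=means.get, reverse=True))

original = ranking(np.arange(n_cases))
same = sum(
    ranking(rng.integers(0, n_cases, n_cases)) == original
    for _ in range(1000)
)
print(f"{same / 10:.1f}% of bootstrap samples reproduce the ranking")
```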

5. Challenge motivation not convincing

Why do you want to run your challenge? What is your motivation? Are you trying to find algorithms that exceed the current state of the art? Are you trying to find solutions for an ill-posed problem? These questions should be pondered thoroughly from the beginning.

6. Too little information provided/no justification

Challenges are validation studies. As such, the design should be completely transparent, and design choices should be justified and comprehensible. In the past, challenges were rejected because some design choices were not justified at all or were not reproducible. Here are some of the parameters that were most frequently left unclear in rejected challenges.

As described above, the dataset is highly important: it should be absolutely clear why you chose a specific training/test split. Next, the annotation process is as important as the data itself. Questions such as 'Who annotated the data?', 'What was the experience level of the annotators?' and 'Did multiple annotators annotate the data?' should be clearly answered, and a link to the labeling protocol should be provided. Make sure to justify why you chose specific performance metrics; the metrics should reflect your challenge goal (see above) and should assess valid properties. Be aware of mutually dependent metrics. The ranking computation should be clear; in the best case, write it in a pseudoalgorithmic form, as in the sketch below, and make clear why this computation was chosen. Finally, how does the submission of results work for the participants? Are they required to submit Docker containers? Upload results? Upload code? All of these questions are important.
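To illustrate why the ranking scheme must be stated precisely, here is a minimal, numpy-only sketch (with fictitious per-case scores) of two common schemes, "aggregate then rank" and "rank then aggregate", which can produce different winners on the same data:

```python
# Minimal sketch (fictitious scores): two common ranking schemes can
# disagree on the same data, so state precisely which one you use.
import numpy as np

metric = {  # per-case metric values, higher is better
    "alg_A": np.array([0.90, 0.90, 0.10]),
    "alg_B": np.array([0.80, 0.80, 0.80]),
}
names = list(metric)
per_case = np.vstack([metric[n] for n in names])

# Scheme 1 - aggregate then rank: average per algorithm, rank the means.
means = per_case.mean(axis=1)
print("aggregate then rank:", names[means.argmax()])               # -> alg_B

# Scheme 2 - rank then aggregate: rank algorithms per case (1 = best,
# ties broken arbitrarily here), then average the ranks.
ranks = (-per_case).argsort(axis=0).argsort(axis=0) + 1
print("rank then aggregate:", names[ranks.mean(axis=1).argmin()])  # -> alg_A
```

Whichever scheme you choose, document it and justify the choice.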

7. Many typos, grammatical errors, incomplete sentences

Finally, take your best shot when filling out the submission form. We know it's tough; there are many parameters. But all of them matter. The reviewers will notice immediately if you filled out the parameters in a rush. You wouldn't submit a paper full of typos, grammatical errors and incomplete sentences, right? So don't submit a challenge proposal that way either.

Developer Team

The team is associated with the Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ) Heidelberg.

Annika Reinke

Developer

Sinan Onogur

Developer

Matthias Eisenmann

Developer

Lena Maier-Hein

PI