EAD Challenge: Multi-class artefact detection in video endoscopy
- Proceedings of the EAD2019 challenge can be found at: http://ceur-ws.org/Vol-2366/
- Learn more about the workshop at Spotlight
Submit your abstract here: https://cmt3.research.microsoft.com/EAD2019
🇮🇹 Travel Grant Winners of EAD2019:
📈 Challenge workshop news:
- Multi-class artefact detection: Localization of bounding boxes and class labels for
7 artefact classes in given frames.
- Region segmentation: Precise boundary delineation of detected artefacts.
- Detection generalization: Detection performance independent of specific data type and source.
The Challenge (in detail)
Endoscopy is a widely used clinical procedure for the early detection of numerous cancers (e.g., nasopharyngeal, oesophageal adenocarcinoma, gastric, colorectal, and bladder cancers), for therapeutic procedures, and for minimally invasive surgery (e.g., laparoscopy). During this procedure an endoscope is used: a long, thin, rigid or flexible tube with a light source and a camera at the tip, which allows the inside of the affected organs to be visualized on a screen. A major drawback of these video frames is that they are heavily corrupted by multiple artefacts (e.g., pixel saturation, motion blur, defocus, specular reflections, bubbles, fluid, and debris). These artefacts not only make it difficult to visualize the underlying tissue during diagnosis but also affect any post-analysis methods required for follow-up (e.g., video mosaicking done for follow-up and archival purposes, and video-frame retrieval needed for reporting). Accurate detection of artefacts is a core challenge in a wide range of endoscopic applications addressing multiple disease areas. Precise detection of these artefacts is essential for high-quality endoscopic frame restoration and crucial for realising reliable computer-assisted endoscopy tools for improved patient care.
This challenge proposal aims to address the following key problems inherent in all video endoscopy:
1) Multi-class artefact detection:
Existing endoscopy workflows detect only one artefact class, which is insufficient for high-quality frame restoration. In general, the same video frame can be corrupted by multiple artefacts; for example, motion blur, specular reflections, and low contrast can all be present in the same frame. Further, not all artefact types contaminate the frame equally. So, unless the multiple artefacts present in a frame are known, together with their precise spatial locations, clinically relevant frame-restoration quality cannot be guaranteed. Another advantage of such detection is that frame-quality assessment can be guided to minimise the number of frames that get discarded during automated video analysis.
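To illustrate how per-box, per-class detections are typically scored, here is a minimal intersection-over-union (IoU) sketch. The `(x1, y1, x2, y2)` box format and the class names are illustrative assumptions, not the challenge's official evaluation protocol:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A single frame can carry several artefact detections at once
# (hypothetical boxes, matched to ground truth by class for brevity):
predictions = [("blur", (0, 0, 50, 50)), ("specularity", (40, 40, 80, 80))]
ground_truth = [("blur", (0, 0, 40, 50)), ("specularity", (45, 45, 80, 80))]
for (p_cls, p_box), (_, g_box) in zip(predictions, ground_truth):
    print(p_cls, round(iou(p_box, g_box), 3))
```

In practice, detection benchmarks aggregate such per-box overlaps into mean average precision over IoU thresholds; the sketch shows only the core overlap measure.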
2) Multi-class artefact region segmentation:
Frame artefacts typically have irregular shapes that are non-rectangular and consequently are overestimated by the detected bounding boxes. Development of accurate semantic segmentation methods to precisely delineate the boundaries of each detected frame artefact will enable optimized restoration of video frames without sacrificing information.
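The gap between a rectangular box and an irregular artefact region is what segmentation metrics such as the Dice coefficient capture. A minimal sketch (the toy masks below are assumptions for illustration, not challenge data):

```python
import numpy as np

def dice(pred_mask, gt_mask):
    """Dice overlap between two binary masks; 1.0 is a perfect boundary match."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total > 0 else 1.0

# An artefact region versus the larger bounding box that encloses it:
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True        # ground-truth region: 16 pixels
box = np.zeros((8, 8), dtype=bool)
box[1:7, 1:7] = True       # enclosing bounding box: 36 pixels
print(round(dice(box, gt), 3))   # 2*16 / (36+16) = 0.615
```

The box's Dice score against the true region stays well below 1.0 even though it fully contains the artefact, which is why precise boundary delineation matters for restoration.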
3) Multi-class artefact generalisation:
It is important for algorithms to avoid biases induced by specific training datasets. Also, it is well known that generating expert annotations is time consuming and can be infeasible for many institutions. In this challenge, we encourage participants to develop machine learning algorithms that can be used across different endoscopic datasets worldwide, based on our large combined dataset from 6 different institutions.
This challenge is of immediate interest to the endoscopic community comprising:
Image analysts – precise artefact detection for video restoration can assist in downstream analysis such as mosaicking, image retrieval, automated diagnosis.
Clinical endoscopists – an artefact detection algorithm can help train endoscopists to develop better imaging protocols.
Manufacturers/Industry – even though hardware specifications of endoscopes have improved, allowing high-definition acquisition, frame artefacts are inevitably still present. Precise identification of artefacts can lead to effective frame corrections. Thus, this challenge offers new opportunities for industry to correct the frame-quality issues widely present in endoscopic video acquisition due to motion (blur), viewpoint changes (pixel saturation, low contrast, or specular reflection), and floating objects (debris, occlusion).
In addition, this challenge aims to reach a larger audience that includes people working on data obtained from other optical instruments (not limited to endoscopy) where different types of artefacts and noise are a serious issue. Detection methods have also been widely used to solve various computer vision problems, so this challenge will offer new insights into video quality improvement.
📨 Join our Google Group for updates and to share your problems or help others (click here)
Number of participants: 937