Presentation + Paper
15 February 2021
Annotation quality vs. quantity for deep-learned medical image segmentation
Abstract
For medical image segmentation, deep learning approaches using convolutional neural networks (CNNs) are currently superseding classical methods. Good accuracy requires large annotated training data sets. As expert annotations are costly to acquire, crowdsourcing, i.e., obtaining several annotations from a large group of non-experts, has been proposed. Medical applications, however, require a high accuracy of the segmented regions. It is agreed that a larger training set yields better CNN performance, but it is unclear which quality standards the annotations need to comply with for sufficient accuracy. For crowdsourcing, this translates to the question of how many annotations per image need to be obtained. In this work, we investigate the effect of the annotation quality used for model training on the predictions of a CNN. Several annotation sets of different quality levels were generated by applying the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm to crowdsourced segmentations. CNN models were trained on these annotations and the results were compared to a ground truth. We found that CNN performance increases logarithmically with annotation quality. Furthermore, we evaluated whether a higher number of annotations can compensate for lower annotation quality by comparing predictions from models trained on differently sized training data sets. We found that once a minimum quality of at least three annotations per image is reached, it is more efficient to distribute crowdsourced annotations over as many images as possible. These results can serve as a guideline for the image assignment mechanism of future crowdsourcing applications and motivate the use of gamification, i.e., getting users to segment as many images of a data set as possible for fun.
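To illustrate the fusion step described above, the following is a minimal numpy sketch of STAPLE's expectation-maximization idea for binary masks. It is a simplification of the full algorithm (no spatial prior, flattened pixels); the function name, iteration count, and initial sensitivity/specificity values are illustrative choices, not taken from the paper.

```python
import numpy as np

def staple_binary(annotations, n_iter=30, eps=1e-7):
    """Fuse binary rater masks with a simplified STAPLE EM.

    annotations: array of shape (n_raters, n_pixels) with values {0, 1}.
    Returns the posterior probability that each pixel is foreground.
    """
    D = np.asarray(annotations, dtype=float)
    n_raters, n_pixels = D.shape
    # Initialize each rater's sensitivity p and specificity q optimistically.
    p = np.full(n_raters, 0.9)
    q = np.full(n_raters, 0.9)
    # Prior foreground probability estimated from the mean vote.
    prior = D.mean()
    W = np.full(n_pixels, prior)
    for _ in range(n_iter):
        # E-step: posterior P(truth = 1 | votes) for every pixel.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + eps)
        # M-step: re-estimate sensitivity/specificity from the soft truth.
        p = (D @ W) / (W.sum() + eps)
        q = ((1 - D) @ (1 - W)) / ((1 - W).sum() + eps)
    return W
```

Thresholding the returned posterior at 0.5 yields a fused consensus mask; repeating the fusion with different numbers of raters per image is one way to generate the annotation sets of varying quality that the study compares. Production implementations such as the STAPLE filter in ITK/SimpleITK should be preferred over this sketch.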
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Tim Wesemeyer, Malte-Levin Jauer, and Thomas M. Deserno "Annotation quality vs. quantity for deep-learned medical image segmentation", Proc. SPIE 11601, Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and Applications, 116010C (15 February 2021); https://doi.org/10.1117/12.2582226
KEYWORDS
Image segmentation, Medical imaging, Convolutional neural networks, Data modeling, Health informatics