There has recently been great progress in the automatic segmentation of medical images with deep learning algorithms. Most works acknowledge observer variation as a problem because it makes training data heterogeneous, but so far no attempts have been made to capture this variation explicitly. Here, we propose an approach capable of mimicking different styles of segmentation, which can potentially improve the quality and clinical acceptance of automatic segmentation methods. Instead of training one neural network on all available data, we train several neural networks separately on subgroups of data belonging to different segmentation variations. Because it may be unclear a priori which styles of segmentation exist in the data, and because different styles do not necessarily map one-to-one to different observers, the subgroups should be determined automatically. We achieve this by searching for the best data partition with a genetic algorithm, so that each network can learn a specific style of segmentation from its grouped training data. We provide proof-of-principle results on open-source prostate segmentation MRI data with simulated observer variations. Our approach improves Dice and surface Dice coefficients by up to 23% (depending on the simulated variations) compared to one network trained on all data.
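The genetic search over data partitions described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the assignment encoding, crossover, and mutation are standard GA ingredients, while the fitness function here is a hypothetical style-homogeneity proxy (in the actual method, fitness would come from the validation Dice of the networks trained on each subgroup).

```python
import random

def fitness(assignment, styles):
    # Hypothetical proxy: reward partitions whose groups contain a single
    # segmentation style. In the paper, fitness would instead be derived
    # from validation Dice of per-group networks.
    groups = {}
    for g, s in zip(assignment, styles):
        groups.setdefault(g, []).append(s)
    score = 0
    for members in groups.values():
        majority = max(set(members), key=members.count)
        score += sum(1 for m in members if m == majority)
    return score / len(styles)

def evolve(styles, n_groups=2, pop=30, gens=50, seed=0):
    """Search for a good assignment of training cases to subgroups."""
    rng = random.Random(seed)
    n = len(styles)
    population = [[rng.randrange(n_groups) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, styles), reverse=True)
        survivors = population[:pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] = rng.randrange(n_groups)  # point mutation
            children.append(child)
        population = survivors + children
    return max(population, key=lambda a: fitness(a, styles))
```

With two latent styles in the data, the search converges toward partitions that separate them, which each per-group network would then learn as its own style.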
Patients suffering from cerebral ischemia or subarachnoid hemorrhage undergo a 4D (3D + time) CT Perfusion (CTP) scan to assess cerebral perfusion and a CT Angiography (CTA) scan to assess the vasculature. The aim of our research is to extract the vascular information from the CTP scan. This requires thin-slice CTP scans, which suffer from a substantial amount of noise; noise reduction is therefore an important prerequisite for further analysis. So far, the few noise-filtering methods for 4D datasets proposed in the literature treat the temporal dimension as a fourth dimension similar to the three spatial dimensions, mixing temporal and spatial intensity information. We propose a bilateral noise reduction method based on time-intensity profile similarity (TIPS), which reduces noise while preserving temporal intensity information. TIPS was compared to 4D bilateral filtering on 10 patient CTP scans; even though TIPS bilateral filtering is much faster, it yields better vessel visibility and a higher image quality ranking (observer study) than 4D bilateral filtering.
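The core idea of TIPS can be sketched as a bilateral filter whose range weight compares whole time-intensity profiles rather than single intensities, so a neighbor contributes the same weight at every time point. This is an illustrative NumPy sketch, not the paper's optimized implementation; the parameter names `sigma_s` and `sigma_t` and the SSD profile distance are assumptions for the example.

```python
import numpy as np

def tips_bilateral(data, sigma_s=1.0, sigma_t=50.0, radius=1):
    """Sketch of TIPS bilateral filtering for a 4D array (t, z, y, x).

    Each voxel is replaced by a weighted average of its spatial
    neighbors' profiles; the weight combines spatial closeness with
    time-intensity profile similarity (SSD between profiles), so
    temporal information is never mixed across time points.
    """
    t, nz, ny, nx = data.shape
    out = np.zeros_like(data, dtype=float)
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                p = data[:, z, y, x].astype(float)
                acc, wsum = np.zeros(t), 0.0
                for dz in range(-radius, radius + 1):
                    for dy in range(-radius, radius + 1):
                        for dx in range(-radius, radius + 1):
                            zz, yy, xx = z + dz, y + dy, x + dx
                            if not (0 <= zz < nz and 0 <= yy < ny
                                    and 0 <= xx < nx):
                                continue
                            q = data[:, zz, yy, xx].astype(float)
                            ssd = np.mean((p - q) ** 2)  # profile distance
                            w = (np.exp(-(dz*dz + dy*dy + dx*dx)
                                        / (2 * sigma_s**2))
                                 * np.exp(-ssd / (2 * sigma_t**2)))
                            acc += w * q
                            wsum += w
                out[:, z, y, x] = acc / wsum
    return out
```

Because the same weight is applied to all time points of a neighbor, the filtered time-intensity profile keeps its shape, which is what the perfusion analysis depends on.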
3D CT Angiography (CTA) scans are currently used to assess the cerebral arteries. An additional 4D CT Perfusion (CTP) scan is often acquired to determine perfusion parameters in the cerebral parenchyma. We propose a method to extract a three-dimensional volume showing either the arteries (arteriogram) or the veins (venogram) from the 4D CTP scan. This would allow cerebrovascular assessment using the CTP scan and obviate the need for acquiring an additional CTA scan. Preprocessing consists of rigid registration of the time volumes of the CTP scan and masking out extracranial structures, bone, and air. Next, a 3D volume containing the vessels (vascular volume) is extracted using the absolute area under the first-derivative curve in time. To segment the arteries and veins, we use the time to peak of the contrast enhancement curve combined with region growing within a rough vessel segmentation. Finally, the artery/vein segmentation is used to suppress either the veins or the arteries in the vascular volume, yielding the arteriogram and venogram. To evaluate the method, 11 arteriograms and venograms were visually inspected by an expert observer, with special attention to the major cerebral arteries (circle of Willis) and veins (straight and transverse sinuses). The results show that the proposed method effectively extracts the major cerebral arteries and veins from CTP scans.
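The vascular-volume and artery/vein steps above can be sketched as follows. This is a simplified illustration under stated assumptions: a fixed time-to-peak threshold `ttp_split` stands in for the paper's time-to-peak criterion, and the region-growing refinement within the rough vessel segmentation is omitted for brevity.

```python
import numpy as np

def arteriogram_venogram(ctp, vessel_mask, ttp_split):
    """Sketch of arteriogram/venogram extraction from a 4D CTP volume.

    ctp         : 4D array (t, z, y, x) of registered, masked CTP data.
    vessel_mask : 3D boolean rough vessel segmentation.
    ttp_split   : assumed time-to-peak threshold (frames) separating
                  early (arterial) from late (venous) enhancement.
    """
    deriv = np.diff(ctp.astype(float), axis=0)
    # Vascular volume: absolute area under the first-derivative curve.
    vascular = np.abs(deriv).sum(axis=0)
    # Time to peak of the contrast enhancement curve, per voxel.
    ttp = ctp.argmax(axis=0)
    arteries = vessel_mask & (ttp <= ttp_split)   # early enhancement
    veins = vessel_mask & (ttp > ttp_split)       # late enhancement
    arteriogram = np.where(veins, 0.0, vascular)  # suppress veins
    venogram = np.where(arteries, 0.0, vascular)  # suppress arteries
    return arteriogram, venogram
```

On a toy volume with one early-peaking and one late-peaking vessel voxel, the early voxel survives only in the arteriogram and the late voxel only in the venogram, mirroring the suppression step of the method.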