Deep learning (DL)-based reconstruction has been introduced in CT, with two major manufacturers offering such methods in the clinic; these are trained mostly on patient data (or a combination of patient and phantom data). Our purpose was to investigate the influence of DL-based reconstruction on object detectability compared with the current standard of iterative reconstruction in routine CT head protocols, using a model observer to analyze the detectability of lesion-like objects (brain, bone, and lung tissue equivalent; 5 mm diameter, 25 mm length) in a commercial anthropomorphic head phantom. The phantom was scanned 10 times on two CT systems (same manufacturer, different models) with the routine head protocol, and images were reconstructed with filtered back projection (FBP), iterative reconstruction (IR), and DL-based methods. As input for the model observer, ROIs were extracted centered on the locations of the cylinders, and for each of them four nearby background locations were selected. The ROI locations in the phantom were analogous for both scanners' data. The non-prewhitening matched filter with an eye filter (NPWE) model observer was applied (Burgess eye filter, peak at 4 cy/deg, 50 cm eye-monitor distance). On visual inspection, the phantom brain background ROIs showed differences in noise texture between the reconstruction methods, with a more uniform distribution for the DL-based method on both CT systems. The average d′ values and ranges for system 1 were: [lung — FBP: -124.9 (-178.2, -99.1); IR: -126.7 (-188.2, -102.9); DL: -136.2 (-181.9, -119.3)]; [bone — FBP: 206.7 (166.7, 269.7); IR: 215.4 (175.8, 278.1); DL: 268.3 (215.3, 339.5)]; [soft tissue — FBP: -14.6 (-19.6, -9.8); IR: -15.5 (-20.7, -10.2); DL: -18.8 (-24.6, -10.6)]. The NPWE model consistently yielded higher d′ magnitudes in the DL-reconstructed images compared with IR and FBP for all three materials on both systems.
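As a rough illustration (not part of the abstract above), the NPWE detectability index can be sketched in the frequency domain as d′ = Σ|ΔS|²E² / √(Σ|ΔS|²E⁴·NPS), where ΔS is the expected signal-difference spectrum, E the Burgess eye filter, and NPS the noise power spectrum. All function names, the 1-D radial grid, and the Gaussian signal below are hypothetical simplifications:

```python
import numpy as np

def burgess_eye_filter(f, peak=4.0, n=1.3):
    """Burgess-style eye filter E(f) = f^n * exp(-c f^2).

    c is chosen so the response peaks at `peak` cycles/degree:
    d/df [f^n exp(-c f^2)] = 0  at  f = sqrt(n / (2c)).
    """
    c = n / (2.0 * peak**2)
    e = f**n * np.exp(-c * f**2)
    return e / e.max()

def npwe_dprime(delta_s2, eye, nps):
    """NPWE detectability index on a 1-D radial frequency grid.

    delta_s2 : |FT of the expected signal difference|^2
    eye      : eye filter E(f) on the same grid
    nps      : noise power spectrum on the same grid
    """
    num = np.sum(delta_s2 * eye**2)
    den = np.sum(delta_s2 * eye**4 * nps)
    return num / np.sqrt(den)

# Toy example: Gaussian signal spectrum, white noise of two magnitudes.
f = np.linspace(0.01, 10.0, 100)          # cy/deg (hypothetical grid)
eye = burgess_eye_filter(f)
signal2 = np.exp(-f**2)
d_low_noise = npwe_dprime(signal2, eye, np.ones_like(f))
d_high_noise = npwe_dprime(signal2, eye, 2.0 * np.ones_like(f))
```

Doubling the white-noise power halves the denominator's square root, so d′ drops by a factor of √2 — the kind of noise-texture sensitivity the abstract exploits to rank FBP, IR, and DL reconstructions.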
Purpose: In addition to less frequent, more comprehensive tests, the quality assurance (QA) protocol for a magnetic resonance imaging (MRI) scanner may include cursory daily or weekly phantom checks to verify equipment constancy. With an automatic image analysis workflow, the daily QA images can be further used to study scanner baseline performance and both long- and short-term variations in image quality. With known baselines and variation profiles, automatic error detection can be employed.
Approach: Four image quality parameters were followed for 17 MRI scanners over six months: signal-to-noise ratio (SNR), image intensity uniformity, ghosting artifact, and geometrical distortions. Baselines and normal variations were determined. An automatic detection of abnormal QA images was compared with image deviations visually detected by human observers.
Results: There were significant inter-scanner differences in the QA parameters. In some cases, the results exceeded commonly accepted tolerances. Neither scanner field strength nor whether a unit was stationary or mobile showed a clear relationship with the QA results.
Conclusions: The variations and baseline levels of image QA parameters can differ significantly between MRI scanners. Scanner-specific error thresholds based on parameter means and standard deviations are a viable option for detecting abnormal QA images.
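The scanner-specific thresholding the conclusions describe can be sketched as flagging any new QA value that falls outside mean ± k·SD of that scanner's own history. This is a minimal illustration, not the authors' implementation; the function names and the k = 3 default are assumptions:

```python
import numpy as np

def qa_baseline(history):
    """Per-scanner baseline from past QA values (e.g. daily SNR readings).

    Returns (mean, sample standard deviation).
    """
    arr = np.asarray(history, dtype=float)
    return arr.mean(), arr.std(ddof=1)

def is_abnormal(value, mean, sd, k=3.0):
    """Flag a new QA value outside the scanner-specific mean ± k*sd band."""
    return abs(value - mean) > k * sd

# Toy example: a stable SNR history, then one plausible and one outlying value.
mean, sd = qa_baseline([100.0, 102.0, 98.0, 101.0, 99.0])
is_abnormal(101.0, mean, sd)   # within the band
is_abnormal(120.0, mean, sd)   # well outside the band
```

Because the band is derived from each scanner's own history, it adapts to the inter-scanner baseline differences reported in the results, unlike a single fixed tolerance applied fleet-wide.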