Generative adversarial networks (GANs) have been used to successfully translate images between multiple imaging modalities. While there is a significant amount of literature on the use cases for these approaches, there has been limited investigation into optimal model design and evaluation criteria. In this paper, we demonstrated the performance of different approaches on the task of cone-beam computed tomography (CBCT) to fan-beam computed tomography (CT) translation. We examined the implications of choosing between 2D and 3D models, the size of 3D patches, and the integration of the Structural Similarity Index Measure (SSIM) into the cycle-consistency loss. Additionally, we introduced a partially-invertible VNet architecture into the RevGAN framework, enabling the use of 3D UNet-like architectures with a minimal memory footprint. We compared image similarity metrics to visual inspection as an evaluation method for these models using held-out patient data and phantom scans to demonstrate their generalizability. Our findings suggest that 3D models, despite requiring a longer training time to converge due to their larger number of parameters, produce fewer image perturbations than 2D models. Training with larger patches also improved stability and significantly reduced artifacts, at the cost of longer training, while the SSIM-L1 cycle-consistency loss function enhanced performance. Interestingly, our study revealed a discrepancy between standard image similarity metrics and visual evaluation, with the former failing to adequately penalize visually evident artifacts in synthetic CT scans. This underscores the need for tailored and standardized evaluation metrics for medical image translation, which would facilitate more accurate comparisons across studies. To further the clinical applicability of image-to-image translation, we have open-sourced our methods and experiments, available at github.com/ganslate-team.
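The SSIM-L1 cycle-consistency loss mentioned above can be sketched as a weighted blend of structural dissimilarity and mean absolute error between an image and its cycle-reconstruction. The sketch below uses a simplified single-window (global) SSIM rather than the usual sliding-window variant, and the weighting `alpha` is illustrative, not a value from the paper:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def ssim_l1_cycle_loss(x, x_cycled, alpha=0.84):
    """Cycle-consistency loss blending DSSIM and L1.

    x        : original image (e.g. a CBCT patch), values in [0, 1]
    x_cycled : result of translating x to CT and back
    alpha    : illustrative weighting between the two terms
    """
    dssim = (1.0 - global_ssim(x, x_cycled)) / 2.0
    l1 = np.abs(x - x_cycled).mean()
    return alpha * dssim + (1.0 - alpha) * l1
```

For a perfect cycle-reconstruction the loss is zero; any intensity or structural deviation increases it, with `alpha` trading off structural fidelity against pixel-wise accuracy.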
There is tremendous potential for AI-based quantitative imaging biomarkers to make clinical trials with standard-of-care CT more efficient. There is, however, a well-recognized gap between discovery and translation to practice for AI-based imaging biomarkers. Our goal is to enable more efficient and effective imaging clinical trials by characterizing the repeatability and reproducibility of AI-based imaging biomarkers. We used virtual imaging clinical trials (VCTs) to simulate the data pathway by estimating the probability distribution functions for patient-, disease-, and imaging-related sources of variability. We evaluated the bias and variance in estimating the volume of liver lesions, and the variance of an algorithm that has shown success in predicting mortality risk for non-small cell lung cancer (NSCLC) patients. We used the volumetric XCAT anthropomorphic simulated phantom with inserted lesions of varied shape, size, and location. For CT acquisition and reconstruction we used the CatSim package and varied the acquisition mAs and image reconstruction kernel. For each combination of parameters we generated 20 independent realizations with quantum and electronic noise. The resulting images were analyzed with the two AI-based imaging biomarkers described above, from which we computed the mean and standard deviation of the results. Mean values and/or bias results were counter-intuitive in some cases, e.g., lower mean bias in scans with lower mAs. Adding variations in lesion size, shape, and location increased the variance of the estimated parameters more than the mAs effects did. These results indicate the feasibility of using VCTs to estimate the repeatability and reproducibility of AI-based biomarkers used in clinical trials with standard-of-care CT.
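The core of the analysis above — estimating the bias and variance of a biomarker from repeated noisy realizations of the same acquisition — can be sketched as a small Monte-Carlo routine. This is a toy stand-in: the paper generates realizations with CatSim-simulated quantum and electronic noise on XCAT phantoms, whereas here a hypothetical `noisy_volume` measurement with Gaussian noise plays that role:

```python
import numpy as np

def repeatability_stats(measure, true_value, n_realizations=20, seed=0):
    """Monte-Carlo bias/variance estimate for a biomarker.

    measure : callable taking an RNG and returning one biomarker estimate
              (stand-in for "run the AI biomarker on one noisy CT realization")
    true_value : ground-truth value from the digital phantom
    n_realizations : number of independent noise realizations (20 in the paper)
    """
    rng = np.random.default_rng(seed)
    estimates = np.array([measure(rng) for _ in range(n_realizations)])
    return {
        "mean": estimates.mean(),
        "bias": estimates.mean() - true_value,
        "std": estimates.std(ddof=1),
    }

# Hypothetical lesion-volume measurement: 2% systematic underestimation
# plus additive measurement noise (both values are illustrative only).
TRUE_VOLUME = 500.0  # mm^3

def noisy_volume(rng, sigma=15.0):
    return TRUE_VOLUME * 0.98 + rng.normal(0.0, sigma)

stats = repeatability_stats(noisy_volume, TRUE_VOLUME)
```

With this setup the estimated bias should land near -10 mm^3 and the standard deviation near the injected noise level; in the paper, the same mean/std computation is repeated per combination of mAs, reconstruction kernel, and lesion configuration.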