The purpose of this study was to develop a method for ascertaining the likelihood that a measured change in radiomics features between pairs of sequentially acquired CT images reflects an actual change, given variable CT scan settings. Fifteen realistic computational lung nodule models were simulated with varying volumes and morphologies (231–1,245 mm³ volume range; no, medium, and high spiculation). The virtual nodule models were degraded by the noise magnitude, noise texture, and resolution properties of five commercial CT systems operating across 297 unique combinations of scan and reconstruction settings: 33 reconstruction kernels, three dose levels (CTDIvol of 1.90, 3.75, and 7.5 mGy), and three slice thicknesses (0.625, 1.25, and 2.5 mm). Images of each nodule were synthesized five times under each imaging condition, for a total of 22,275 imaged nodules. The simulated nodule images were segmented using an automatic active contour segmentation algorithm, morphology features were calculated from the segmentations, and these were compared with the ground-truth (truth-based) morphology features to estimate Dmin, the minimum difference between two feature measurements that can be reliably detected across any pair of imaging conditions (scanner, kernel, dose, etc.). Dmin was defined as the smallest difference in a radiomics feature for which a measured difference corresponds to a true difference 95% of the time, given a single segmentation algorithm. The mean value of Dmin ranged from 1.8% to 70.0% depending on the specific radiomics feature. An analysis of the volume feature revealed that the lowest Dmin occurred for a slice thickness of 0.625 mm and a CTDIvol of 7.5 mGy. This study presents a method to translate measured radiomics feature differences into the likelihood of a true change, accounting for the statistics of noise across imaging conditions.
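To make the Dmin estimation concrete, the sketch below illustrates one way such a threshold could be computed, assuming per-condition percent errors (measured feature minus ground truth) are available. The function name, the percentile-based rule, and the condition labels are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np
from itertools import combinations

def dmin_per_pair(errors_by_condition, coverage=0.95):
    """Estimate Dmin (in percent) for every pair of imaging conditions.

    errors_by_condition: dict mapping a condition label to a 1-D array of
    percent errors (measured feature minus ground truth, as % of truth)
    pooled over nodules and repeats for that condition.

    Here Dmin for a pair is taken as the 95th percentile of the absolute
    difference between errors drawn from the two conditions, i.e. the
    smallest measured change that exceeds measurement variability with
    the requested coverage. This percentile rule is an assumption made
    for illustration only.
    """
    dmin = {}
    for (a, ea), (b, eb) in combinations(errors_by_condition.items(), 2):
        # All cross-condition error differences via broadcasting.
        diffs = np.abs(ea[:, None] - eb[None, :]).ravel()
        dmin[(a, b)] = np.quantile(diffs, coverage)
    return dmin

# Hypothetical usage with synthetic errors for three imaging conditions.
rng = np.random.default_rng(0)
errors = {
    "scannerA_smooth_7.5mGy_0.625mm": rng.normal(0, 2.0, 75),
    "scannerA_sharp_1.9mGy_2.5mm":    rng.normal(5, 8.0, 75),
    "scannerB_smooth_3.75mGy_1.25mm": rng.normal(1, 3.0, 75),
}
pairwise = dmin_per_pair(errors)
print(f"mean Dmin across condition pairs: {np.mean(list(pairwise.values())):.1f}%")
```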