The introduction of photon-counting detectors in x-ray computed tomography raises the question of how reconstruction algorithms should be adapted to photon-counting measurement data. The transition from energy-integrating to photon-counting detectors introduces new effects into the data model, such as pure Poisson statistics and increased crosstalk between detector pixels (e.g., due to charge sharing), but it is still not known in detail how these effects can be treated accurately by the reconstruction algorithm. In this work, we propose a new reconstruction method based on penalized-likelihood reconstruction that incorporates these effects. By starting from a simple, easily solved reconstruction problem and adding correction terms for the additional physical effects, we obtain a series expansion for the solution to the image reconstruction problem. This approach serves the twofold purpose of (1) yielding a new, potentially faster method of incorporating complex detector models in the reconstruction process and (2) providing insight into the impact of the non-ideal physical effects on the reconstructed image. We investigate the potential for reconstructing images from simulated photon-counting, energy-resolving CT data with the new algorithm by including correction terms representing pure Poisson statistics and inter-pixel crosstalk, and we investigate the impact of these physical effects on the reconstructed images. Results indicate that using two correction terms gives good agreement with the converged solution, suggesting that the new method is feasible in practice. This new approach to image reconstruction can help in developing improved reconstruction algorithms for photon-counting CT.
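In our notation (not necessarily the authors' exact formulation), the series-expansion idea can be pictured as follows: starting from a penalized-likelihood objective whose data model is written as a perturbation of a simple one, the reconstruction is expanded in the strength ε of the non-ideal effects (Poisson statistics, crosstalk):

```latex
\hat{x} = \arg\min_x \bigl[ -L_\epsilon(y; x) + \beta R(x) \bigr],
\qquad
\hat{x}(\epsilon) = \hat{x}^{(0)} + \epsilon\,\hat{x}^{(1)} + \epsilon^{2}\hat{x}^{(2)} + \cdots
```

Here \(\hat{x}^{(0)}\) solves the simple, easily solved problem and each \(\hat{x}^{(k)}\) is a correction term; the abstract's finding is that truncating after two correction terms already agrees well with the converged solution.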
Fractional Flow Reserve (FFR), the ratio of arterial pressure distal to a coronary lesion to the proximal pressure, is indicative of its hemodynamic significance. This quantity can be determined from invasive measurements made with a catheter, or by using computational methods incorporating models of the coronary vasculature. One of the inputs needed by a model-based approach for estimating FFR from Computed Tomography Angiography (CTA) images (denoted FFR-CT) is the geometry of the coronary arteries, which requires segmentation of the coronary lumen. Several algorithms have been proposed for coronary lumen segmentation, including the recent application of machine learning techniques. For evaluating these algorithms or for training machine learning algorithms, manual segmentation of the lumen has been considered as ground truth. However, since there is inter-observer variability in manual segmentation, it would be useful to first assess the extent to which this variability affects the predicted FFR values. In the current study, we evaluated the impact of inter-observer variability in manual segmentation on computed FFR, using datasets with three different manual segmentations provided as part of the Rotterdam Coronary Artery Evaluation Framework. FFR was computed using a coronary blood flow model. Our results indicate that the effect of variability in manual segmentations on FFR estimates depends on the FFR value. For FFR ≥ 0.97, variability in manual segmentations does not impact FFR estimates, while, for lower FFR values, the variability in manual segmentations leads to significant variability in FFR. The results of this study indicate that researchers should exercise caution when treating manual segmentations as ground truth for estimating FFR from CTA images.
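As a concrete illustration of the FFR definition used above, a minimal sketch follows; the function name and pressure values are ours, and the 0.80 cutoff is the commonly used clinical threshold for hemodynamic significance, not a value taken from this study:

```python
def fractional_flow_reserve(p_distal_mmHg: float, p_proximal_mmHg: float) -> float:
    """FFR = mean pressure distal to the lesion / mean proximal (aortic) pressure."""
    return p_distal_mmHg / p_proximal_mmHg

# Hypothetical example: a lesion dropping mean pressure from 93 to 70 mmHg
ffr = fractional_flow_reserve(70.0, 93.0)   # ~0.75
significant = ffr <= 0.80                   # common clinical cutoff
print(f"FFR = {ffr:.2f}, hemodynamically significant: {significant}")
```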
General Electric has designed an innovative x-ray photonic device that concentrates a polychromatic beam of diverging x-rays into a less divergent, parallel, or focused x-ray beam. The device consists of multiple thin-film multilayer stacks. X-rays incident on a given multilayer stack propagate within a high-refractive-index transmission layer while undergoing multiple total internal reflections from a novel, engineered multilayer containing materials of lower refractive index. Development of this device could lead to order-of-magnitude flux density increases over a large broadband energy range, from below 20 keV to above 300 keV. In this paper, we give an overview of the device and present GE's progress towards fabricating prototype devices.
Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased when it is combined with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery, and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.
Metal artifacts have been a problem associated with computed tomography (CT) since its introduction. Recent techniques to mitigate this problem have included utilization of high-energy (keV) virtual monochromatic spectral (VMS) images, produced via dual-energy CT (DECT). A problem with these high-keV images is that contrast enhancement provided by all commercially available contrast media is severely reduced. Contrast agents based on higher atomic number elements can maintain contrast at the higher energy levels where artifacts are reduced. This study evaluated three such candidate elements: bismuth, tantalum, and tungsten, as well as two conventional contrast elements: iodine and barium. A water-based phantom with vials containing these five elements in solution, as well as different artifact-producing metal structures, was scanned with a DECT scanner capable of rapid operating voltage switching. In the VMS datasets, substantial reductions in the contrast were observed for iodine and barium, which suffered from contrast reductions of 97% and 91%, respectively, at 140 versus 40 keV. In comparison under the same conditions, the candidate agents demonstrated contrast enhancement reductions of only 20%, 29%, and 32% for tungsten, tantalum, and bismuth, respectively. At 140 versus 40 keV, metal artifact severity was reduced by 57% to 85% depending on the phantom configuration.
Image artifacts generated by metal implants have been a problem associated with CT since its introduction. Recent techniques to mitigate this problem have included the utilization of certain Dual-Energy CT (DECT) features. DECT can produce virtual monochromatic spectral (VMS) images, simulating how the data would appear if scanned at a single x-ray energy (keV). High-keV VMS images can greatly reduce the severity of metal artifacts. A problem with these high-keV images is that contrast enhancement provided by all commercially-available contrast media is severely reduced. It is therefore impossible to generate VMS images with simultaneous high contrast and minimized metal artifact severity. Novel contrast agents based on higher atomic number elements can maintain contrast enhancement at the higher energy levels where artifacts are reduced. This study evaluated three such candidate elements: bismuth, tantalum, and tungsten, as well as two conventional contrast elements: iodine and barium. A water-based phantom with vials containing these five elements in solution, as well as different artifact-producing metal structures, was scanned with a DECT scanner capable of rapid operating voltage switching. In the VMS datasets, substantial reductions in the contrast were observed for iodine and barium, which suffered from contrast reductions of 97% and 91%, respectively, at 140 versus 40 keV. In comparison under the same conditions, the novel candidate agents demonstrated contrast enhancement reductions of only 20%, 29%, and 32% for tungsten, tantalum, and bismuth, respectively. At 140 versus 40 keV, metal artifact severity was reduced by 57% to 85% depending on the phantom configuration.
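The percent contrast reductions quoted above follow directly from the contrast enhancement measured at the two VMS energies; a minimal sketch of the arithmetic (the HU values below are hypothetical, chosen only to reproduce a 97% reduction):

```python
def contrast_reduction(c_low_keV: float, c_high_keV: float) -> float:
    """Percent loss in contrast enhancement between low- and high-keV VMS images."""
    return 100.0 * (c_low_keV - c_high_keV) / c_low_keV

# Hypothetical iodine contrast (HU above water) at 40 keV vs 140 keV
print(f"{contrast_reduction(1000.0, 30.0):.0f}% reduction")  # -> 97% reduction
```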
The image quality entitlement is evaluated for multi-energy-bin photon-counting (PC) spectral CT relative to that of energy-integrating and dual-kVp (dkVp) imaging. Physics simulations of X-ray projection channel data and CT images are used to map contrast-to-noise metrics for simple numerical phantom objects with soft tissue, calcium, and iodine materials. The benefits are quantified under ideal detector conditions. Spectral optimization yields on the order of a 2X benefit for iodine visualization, measured by CNR²/dose, in two different imaging modes: optimal energy weighting and optimal mono-energy imaging. In another case studied, strict dose equivalence is maintained by using a composite spectrum for the PC simulation that combines simultaneously the two kVp excitations used sequentially for dkVp. In this case, mono-energetic imaging of iodine contrast agent is shown to achieve 40% higher dose efficiency for photon counting compared to dual kVp, although non-ideal characteristics of the photon-counting response can eliminate much of this benefit.
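The CNR²/dose figure of merit above depends on how the photon-counting energy bins are combined. A minimal sketch of optimal energy weighting under the usual independent-bin assumption (the standard matched-filter result; the bin values below are hypothetical):

```python
import numpy as np

def optimal_bin_weights(signal_diff, noise_var):
    """Matched-filter weights w_b ∝ d_b / σ_b², which maximize the CNR of a
    weighted sum of energy-bin measurements (bins assumed independent)."""
    w = np.asarray(signal_diff) / np.asarray(noise_var)
    return w / np.abs(w).max()

def cnr_squared(signal_diff, noise_var):
    """CNR² of the optimally weighted sum: Σ d_b² / σ_b²."""
    d, v = np.asarray(signal_diff), np.asarray(noise_var)
    return float(np.sum(d * d / v))

# Hypothetical 4-bin example: per-bin contrast (counts) and noise variance
d = [120.0, 90.0, 40.0, 10.0]
v = [900.0, 700.0, 500.0, 400.0]
print(optimal_bin_weights(d, v), cnr_squared(d, v))
```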
KEYWORDS: X-rays, Electron beams, X-ray sources, X-ray imaging, Control systems, Sensors, Medical imaging, Temperature metrology, X-ray computed tomography, X-ray detectors
This paper presents a progress update on the development of a distributed x-ray source. We present a high-level summary of the source integration, simulation, and experimental results, as well as challenges in electron-beam focusing, beam-current gating, voltage isolation, and anode technologies. We also present focal spot measurements, x-ray images, and a summary of our distributed x-ray source concept.
KEYWORDS: Monte Carlo methods, Sensors, Computer simulations, Signal detection, Computed tomography, X-rays, X-ray computed tomography, Scanners, 3D modeling, Aluminum
We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.
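CatSim's own interface is not shown here; the sketch below only illustrates two of the physics ingredients listed above, polychromaticity and quantum noise, for a single detector reading (the spectrum and attenuation values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized tube spectrum: photons per reading per energy bin
energies_keV = np.array([40.0, 60.0, 80.0, 100.0])
n0_photons   = np.array([2e4, 5e4, 4e4, 1e4])

# Hypothetical water attenuation coefficients (1/cm) and a 20 cm path length
mu_water = np.array([0.27, 0.21, 0.18, 0.17])
path_cm  = 20.0

# Polychromatic Beer-Lambert: attenuate each energy bin separately,
# then draw Poisson counts (quantum noise) and sum over energies
expected = n0_photons * np.exp(-mu_water * path_cm)
measured = rng.poisson(expected).sum()
print(measured, expected.sum())
```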
KEYWORDS: Sensors, Imaging systems, Image filtering, X-ray computed tomography, X-rays, Scintillators, Data modeling, Data acquisition, Systems modeling, Signal attenuation
The material specificity of computed tomography is quantified using an experimental benchtop imaging system and a physics-based system model. The apparatus is operated with different detector and system configurations, each giving X-ray energy spectral information but with different overlap among the energy-bin weightings and noise statistics. Multislice computed tomography sinograms are acquired using dual kVp, sequential source filters, or a detector with two scintillator/photodiode layers. Basis-material and atomic number images are created by first applying a material decomposition algorithm followed by filtered backprojection. CT imaging of phantom materials with known elemental composition and density was used for model validation. X-ray scatter levels are measured with a beam-blocking technique and the impact on material accuracy is quantified. The image noise is related to the intensity and spectral characteristics of the X-ray source. For optimal energy separation, adequate image noise is required. The system must be optimized to deliver the appropriate high mA at both energies. The dual-kVp method offers the opportunity to separately engineer the photon flux at low and high kVp. As a result, an optimized system can achieve superior material specificity in a system with limited acquisition time or dose. In contrast, the dual-layer and sequential acquisition modes rely on a material absorption mechanism that yields weaker energy separation and lower overall performance.
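A minimal sketch of the projection-domain material decomposition step named above, in the linearized two-measurement, two-basis case (the effective attenuation values are hypothetical; real implementations invert the nonlinear polychromatic model):

```python
import numpy as np

# Effective basis-material attenuation (1/cm) seen by the low- and
# high-energy measurements, for water and iodine (hypothetical values)
M = np.array([[0.25, 6.0],     # low-energy:  [mu_water, mu_iodine]
              [0.18, 2.0]])    # high-energy: [mu_water, mu_iodine]

# Log-normalized measurements -ln(I/I0) for one detector channel
p = np.array([0.25 * 20 + 6.0 * 0.05,
              0.18 * 20 + 2.0 * 0.05])

# Solve for basis path lengths (cm of water, cm of iodine); feeding these
# decomposed sinograms to filtered backprojection yields basis images
water_cm, iodine_cm = np.linalg.solve(M, p)
print(water_cm, iodine_cm)   # -> 20.0, 0.05
```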
Third-generation CT architectures are approaching fundamental limits. Spatial resolution is limited by the focal spot size and the detector cell size. Temporal resolution is limited by mechanical constraints on gantry rotation speed, and alternative geometries such as electron-beam CT and two-tube-two-detector CT come with severe tradeoffs in terms of image quality, dose-efficiency, and complexity. Image noise is fundamentally linked to patient dose, and dose-efficiency is limited by finite detector efficiency and by limited spatio-temporal control over the X-ray flux. Finally, volumetric coverage is limited by detector size, scattered radiation, cone-beam artifacts, the heel effect, and helical over-scan. We propose a new concept, multi-source inverse-geometry CT, which allows CT to break through several of the above limitations. The proposed architecture has several advantages compared to third-generation CT: the detector is small and can have a high detection efficiency, the optical spot size is more consistent throughout the field of view, scatter is minimized even when eliminating the anti-scatter grid, the X-ray flux from each source can be modulated independently to achieve an optimal noise-dose tradeoff, and the geometry offers unlimited coverage without cone-beam artifacts. In this work we demonstrate the advantages of multi-source inverse-geometry CT using computer simulations.
The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low-contrast soft tissue during interventions in a number of areas, including neurology, cardiac electrophysiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time, large-field-of-view 2D imaging, patient access, and flexible gantry positioning, enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging, including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm to address the need to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
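For reference, the Poisson transmission log-likelihood that MLTR-type algorithms maximize, written in our notation (blank-scan counts \(b_i\), measured counts \(y_i\), line integrals \([A\mu]_i\); constant terms dropped):

```latex
\ell(\mu) \;=\; \sum_i \Bigl[\, y_i \ln\!\bigl(b_i e^{-[A\mu]_i}\bigr) \;-\; b_i e^{-[A\mu]_i} \Bigr]
\;=\; \sum_i \Bigl[\, -\,y_i\,[A\mu]_i \;-\; b_i e^{-[A\mu]_i} \Bigr] \;+\; \text{const.}
```

Each iteration increases ℓ(μ), typically via surrogate functions; with truncated data the maximizer is not unique inside the interior problem's ambiguity, which is why the goal above is a representative, minimally artifacted reconstruction rather than an exact one.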
Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval. The implicit assumption made is that the duration of events in an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of a percentage or time delay from the beginning or the end of RR intervals. The projection data were then tagged with the cardiac phase and processed using temporal weights that are a function of their cardiac phases. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and an 8-slice CT scanner (GE LightSpeed Plus), with heart rates of 40 to 80 bpm. The quality of reconstruction is assessed by the visualization of the major arteries, e.g., the RCA, LAD, and LC, in the reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.
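A minimal sketch of the wavelet decomposition step described above, using PyWavelets; the choice of the 'db4' wavelet and five levels is our assumption, as the abstract does not specify them:

```python
import numpy as np
import pywt

def wavelet_bands(ecg: np.ndarray, wavelet: str = "db4", level: int = 5):
    """Decompose one RR interval and reconstruct one signal per scale;
    QRS energy concentrates at fine scales, P/T waves at coarser ones."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    bands = []
    for k in range(len(coeffs)):
        # Keep only the k-th coefficient array, zero the rest, and invert
        keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[: len(ecg)])
    return bands  # bands[0] = coarsest approximation, bands[-1] = finest detail
```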
Using helical, multi-detector computed tomography (CT) imaging technology operating at sub-second scanning speeds, clinicians are investigating the capabilities of CT for cardiac imaging. In this paper, we describe the application of novel modeling tools to assess CT system capability. These tools allow us to quantify the capabilities of both hardware and software algorithms for cardiac imaging. The model consists of a human thorax, a dynamic model of a human heart, and a complete physics-based, CT system model. The use of the model to predict image quality is demonstrated by varying both the reconstruction algorithm (half-scan, sector-based) and CT system parameters (axial detector resolution). The mathematical tools described provide a means to rapidly evaluate new reconstruction algorithms and CT system designs for cardiac imaging.
Cardiac imaging is still a challenge for CT reconstruction algorithms due to the dynamic nature of the heart. We have developed a new reconstruction technique, called the Flexible Algorithm, which achieves high temporal resolution while remaining robust to heart-rate variations. The Flexible Algorithm first retrospectively tags helical CT views with corresponding cardiac phases obtained from the associated EKG. Next, it determines a set of views for each slice, a stack of which covers the entire heart. Subsequently, the algorithm selects an optimum subset of views to achieve the highest temporal resolution for the desired cardiac phase. Finally, it spatiotemporally filters the views in the selected subsets to reconstruct slices. We tested the performance of our algorithm using both a dynamic analytical phantom and clinical data. Preliminary results indicate that the Flexible Algorithm achieves improved spatiotemporal resolution over a larger range of heart rates and heart-rate variations than standard algorithms do. By providing improved image quality at any desired cardiac phase, and robustness to heart-rate variations, the Flexible Algorithm enables cardiac applications in CT, including those that benefit from multiphase information.
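A minimal sketch of phase-based view weighting in the spirit of the filtering step above; the Gaussian window and its width are our assumptions, as the paper's actual filter is not specified in the abstract:

```python
import numpy as np

def phase_weights(view_phases: np.ndarray, target_phase: float, sigma: float = 0.05):
    """Weight helical views by cardiac-phase distance to the target phase.
    Phases are in [0, 1); distance wraps around the cardiac cycle."""
    d = np.abs(view_phases - target_phase)
    d = np.minimum(d, 1.0 - d)                 # circular phase distance
    w = np.exp(-0.5 * (d / sigma) ** 2)        # Gaussian temporal window
    return w / w.sum()                         # normalize for reconstruction
```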
With the introduction of helical, multi-detector computed tomography (CT) scanners having sub-second scanning speeds, clinicians are currently investigating the role of CT in cardiac imaging. In this paper, we describe a four-dimensional (4D) x-ray attenuation model of a human heart and the use of this model to assess the capabilities of both hardware and software algorithms for cardiac imaging. We developed a model of the human thorax, composed of several analytical structures, and a model of the human heart, constructed from several elliptical surfaces. A model for each coronary vessel consists of a torus placed at a suitable location on the heart's surface. The motion of the heart during the cardiac cycle was implemented by applying transformational operators to each surface composing the heart. We used the 4D model of the heart to generate forward projection data, which then became input into a model of a CT imaging system. The use of the model to predict image quality is demonstrated by varying both the reconstruction algorithm (sector-based, half-scan) and CT system parameters (gantry speed, spatial resolution). The mathematical model of the human heart, while having limitations, provides a means to rapidly evaluate new reconstruction algorithms and CT system designs for cardiac imaging.
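A minimal sketch of the transformation-operator idea for one cardiac surface: periodic contraction of an ellipsoid (all dimensions and the motion law below are hypothetical, not the paper's actual model parameters):

```python
import numpy as np

def ellipsoid_points(a, b, c, n=400):
    """Sample points on an ellipsoid surface with semi-axes (a, b, c) in mm."""
    u, v = np.random.default_rng(0).uniform(0, np.pi, (2, n))
    u *= 2  # azimuth in [0, 2*pi)
    return np.stack([a * np.cos(u) * np.sin(v),
                     b * np.sin(u) * np.sin(v),
                     c * np.cos(v)], axis=1)

def beating_surface(t, period_s=1.0, amplitude=0.15):
    """Apply a time-varying scaling operator: ~15% contraction each cycle."""
    s = 1.0 - amplitude * 0.5 * (1 - np.cos(2 * np.pi * t / period_s))
    return s * ellipsoid_points(35.0, 30.0, 50.0)   # LV-sized ellipsoid
```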
Preliminary MTF and LCD results obtained on several volumetric computed tomography (VCT) systems, employing amorphous flat panel technology, are presented. Constructed around 20-cm x 20-cm, 200-µm pitch amorphous silicon x-ray detectors, the prototypes use standard vascular or CT x-ray sources. Data were obtained from closed-gantry, benchtop, and C-arm-based topologies, over a full 360 degrees of rotation about the target object. The field of view of the devices is approximately 15 cm, with a magnification of 1.25-1.5, providing isotropic resolution at isocenter of 133-160 µm. Acquisitions have been reconstructed using the FDK algorithm, modified by motion corrections also developed by GE. Image quality data were obtained using both industry-standard and custom resolution phantoms as targets. Scanner output is compared on a projection and reconstruction basis against analogous output from a dedicated simulation package, also developed at GE. Measured MTF performance is indicative of a significant advance in isotropic image resolution over commercially available systems. LCD results have been obtained, using industry-standard phantoms, spanning a contrast range of 0.3-1%. Both MTF and LCD measurements agree with simulated data.
A framework for rapid and reliable design of Volumetric Computed Tomography (VCT) systems is presented. This work uses detailed system simulation tools to model standard and anthropomorphic phantoms in order to simulate the CT image and choose optimal system specifications. CT systems using small-pitch, 2-D flat area detectors, initially developed for x-ray projection imaging, have been proposed to implement volume CT for clinical applications. Such systems offer many advantages, but there are also many trade-offs affecting image quality that are not fully understood. Although many of these effects have been studied in the literature for traditional CT applications, there are unique interactions for the very high-resolution flat-panel detectors proposed for volumetric CT. To demonstrate the process, we describe an example that optimizes the parameters to achieve high detectability for thin slices. The VCT system was modeled over a range of operating parameters, including tube voltage, tube current, tube focal spot size, detector cell size, number of views, and scintillator thickness. The response surface, which captures the effects of system components on image quality, was calculated. Optimal and robust designs can be achieved by determining an operating point from the response equations, given the constraints. We verify the system design with images from standard and low-contrast phantoms. Eventually this design tool could be used, in conjunction with clinical researchers, to specify VCT scanner designs, optimize imaging protocols, and quantify image accuracy and repeatability.
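A minimal sketch of the response-surface step: fit a quadratic model of an image quality score over two design factors and read off a candidate operating point (the factors, score values, and ranges below are hypothetical stand-ins for simulator output):

```python
import numpy as np

# Hypothetical simulated detectability scores over a 2-factor design grid:
# x1 = tube voltage (normalized), x2 = scintillator thickness (normalized)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
score = 1.0 - 0.3 * (x1 - 0.2) ** 2 - 0.5 * (x2 + 0.1) ** 2  # stand-in for simulator

# Quadratic response surface: score ≈ b0 + b1*x1 + b2*x2 + b3*x1² + b4*x2² + b5*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Stationary point of the fitted quadratic = candidate optimal operating point
H = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
opt = np.linalg.solve(H, -beta[1:3])
print(opt)   # -> approximately [0.2, -0.1]
```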