New photon-counting detectors (PCD) from Direct Conversion facilitate up to 1000 frames per second with 1 ms frame-acquisition times. When these detectors are used for biplane contrast-media tracking in neuro-angiographic procedures, simultaneous acquisitions are employed and cross-scatter between the planes can degrade image quality. To quantify the cross-scatter contribution, we model simultaneous biplane high-speed neuro-angiography in EGSnrc using the Zubal head phantom. Results indicate an increase in scatter due to cross-talk ranging from 4% to 56% for AP projections and from 48% to 71% for lateral projections, depending on detector orientation. This increase in scatter can be mitigated using anti-scatter grids, energy thresholding, and increased air gaps.
Machine learning (ML) models were investigated to automatically detect the patient head shift from isocenter and the cephalometric landmark locations used as a surrogate for head size. Fluoroscopic images of a Kyoto Kagaku anthropomorphic head phantom were taken at various head shifts and magnification modes to create an image database. One ML model predicts the patient head shift and the other predicts the coordinates of the anatomical landmarks. The goal is to implement these two models in the Dose Tracking System (DTS) developed by our group for eye-lens dose prediction, eliminating the need for manual input by clinical staff.
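As a concrete illustration of the landmark model, the following is a minimal sketch of a convolutional regression network that maps a grayscale fluoroscopic frame to a fixed number of (x, y) landmark coordinates. The input size, landmark count, layer widths, and Keras framework are assumptions for illustration, not the architecture actually used in this work.

```python
# Minimal landmark-regression CNN sketch (illustrative only; input size,
# landmark count, and layer widths are assumed, not the study's architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LANDMARKS = 4            # assumed number of cephalometric landmarks
INPUT_SHAPE = (256, 256, 1)  # assumed downsampled fluoroscopic frame

def build_landmark_model():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        # One (x, y) pair per landmark, normalized to [0, 1] image coordinates.
        layers.Dense(2 * NUM_LANDMARKS, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_landmark_model()
model.summary()
```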
Staff-dose management in fluoroscopic procedures is a continuing concern due to insufficient awareness of radiation dose levels. To maintain dose as low as reasonably achievable (ALARA), we have developed a software system capable of monitoring the procedure-room scattered radiation and the dose to staff members in real time during fluoroscopic procedures. The scattered-radiation display system (SDS) acquires imaging-system signal inputs to update the technique and geometric parameters used to provide a color-coded mapping of room scatter. We have calculated a discrete look-up table (LUT) of scatter distributions using Monte Carlo (MC) software and developed an interpolation technique for the multiple parameters known to alter the spatial shape of the distribution. However, the file size for the LUTs can be large (~2 GB), leading to long SDS installation times in the clinic. Instead, this work investigated the speed and accuracy of a regressional neural network (RNN) that we developed for predicting the scatter distribution from imaging-system inputs without the need for the LUT and interpolation. This method greatly reduces installation time while maintaining real-time performance. Results using error maps derived from the structural similarity index indicate high visual accuracy of the predicted matrices when compared with the MC-calculated distributions. Dose error is also acceptable, with a matrix element-averaged percent error of 31%. This dose-monitoring system for staff members can lead to improved radiation safety through immediate visual feedback of high-dose regions in the room during the procedure as well as enhanced reporting of individual doses post-procedure.
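The comparison of a predicted scatter matrix against its MC-calculated reference could be implemented along the lines of the hedged sketch below, which forms a structural-similarity error map and an element-averaged percent error. The array shapes, the epsilon guard, and the use of scikit-image are assumptions for illustration.

```python
# Sketch of comparing a predicted scatter matrix to the Monte-Carlo reference:
# an SSIM-derived error map plus an element-averaged percent error.
import numpy as np
from skimage.metrics import structural_similarity

def compare_scatter(predicted, mc_reference, eps=1e-12):
    """Return (mean SSIM, SSIM error map, element-averaged percent error)."""
    data_range = mc_reference.max() - mc_reference.min()
    score, ssim_map = structural_similarity(
        mc_reference, predicted, data_range=data_range, full=True)
    error_map = 1.0 - ssim_map                      # high values = poor agreement
    pct_error = 100.0 * np.mean(
        np.abs(predicted - mc_reference) / (mc_reference + eps))
    return score, error_map, pct_error

# Example with random stand-in matrices (real inputs would be the RNN output
# and the MC-calculated distribution on the same grid).
rng = np.random.default_rng(0)
mc = rng.random((64, 64))
pred = mc + 0.05 * rng.standard_normal((64, 64))
print(compare_scatter(pred, mc)[0])
```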
Staff dose management is a continuing concern in fluoroscopically guided interventional (FGI) procedures. Insufficient awareness of radiation scatter levels can lead to unnecessarily high stochastic and deterministic risks from the dose absorbed by staff members. Our group has developed a scattered-radiation display system (SDS) capable of monitoring system parameters in real time using a controller-area network (CAN) bus interface and displaying a color-coded mapping of the Compton-scatter distribution. This system additionally uses a time-of-flight depth-sensing camera to track staff-member positional information for dose-rate updates. The current work capitalizes on our body-tracking methodology to facilitate individualized dose recording via human recognition using 16-bit grayscale depth maps acquired with a Microsoft Kinect V2. Background features are removed from the images using a depth-threshold technique and connected-component analysis, which results in a binary body-silhouette mask. The masks are then fed into a convolutional neural network (CNN) for identification of unique body-shape features. The CNN was trained using 144 binary masks for each of four individuals (576 images in total). Initial results indicate high-fidelity prediction (97.3% testing accuracy) from the CNN irrespective of obstructing objects (face masks and lead aprons). Body tracking is still maintained when protective attire is introduced, albeit with a slight increase in positional-data error. Dose reports can then be produced that contain the cumulative dose to each staff member at the eye-lens, waist, and collar levels. Individualized cumulative dose reporting through the use of a CNN, in addition to real-time feedback in the clinic, will lead to improved radiation dose management.
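A minimal sketch of the mask-generation step (depth thresholding followed by connected-component analysis) is given below. The depth window in millimetres and the use of OpenCV's connected-component routine are assumed example choices, not the exact processing used in the study.

```python
# Sketch of the silhouette-mask step: threshold a 16-bit Kinect depth map,
# then keep the largest connected component as the body mask.
import numpy as np
import cv2

def body_mask(depth_mm, near=500, far=2500):
    """depth_mm: 16-bit depth map (uint16, millimetres) from the Kinect V2."""
    # Keep pixels inside an assumed working depth window.
    fg = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8)
    # Connected-component analysis; label 0 is the background.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(fg, connectivity=8)
    if n_labels < 2:
        return np.zeros_like(fg)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255  # binary silhouette mask

# Example with a synthetic depth frame (a real frame would come from the sensor).
depth = np.full((424, 512), 4000, dtype=np.uint16)
depth[100:350, 200:320] = 1500          # stand-in "person" region
mask = body_mask(depth)
```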
The imaging parameters used in neurointerventional procedures were evaluated to better understand the exposure techniques used clinically and their impact on patient dose. All parameters are available on the imaging system's network bus in real time for each exposure pulse during a procedure. The Canon Dose Tracking System (DTS), which we developed, records the parameters of each exposure event in a raw data log file of controller-area network (CAN) packets. We have collected such log files for 120 neurointerventional cases. Parameters are extracted by converting the raw data log file to a reformatted text file using a MATLAB script. The text file is input into a Microsoft Visual Studio project that outputs a new text file in which, with a reference table, the parameters can be identified. A Python script is then used to extract the specific parameters to be evaluated and output a .csv file, which is input into MATLAB for analysis. The parameters extracted were the kVp, beam filter type, mAs, and the cranial/caudal angle as well as the RAO/LAO angle for the frontal and lateral gantries for DA and pulsed fluoroscopy (PF) modes. The gantry angles ranged from 34° CRA to 42° CAU and from 114° RAO to 91° LAO for DA and PF, respectively. The median kVp was 84 and 73, and the average per-frame mAs was about 11 and 1.8, for DA and PF respectively. This analysis should allow a better understanding of clinical practice in order to relate technique to patient and staff dose.
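The parameter-extraction step could take a form like the hedged sketch below, which reads the parameter-identified text file and writes selected fields to a .csv. The column names and the whitespace-delimited layout are assumptions; the real field definitions follow the DTS reference table.

```python
# Hedged sketch of the Python extraction step: reformatted text file -> .csv.
import csv

FIELDS = ["kVp", "filter", "mAs", "CRA_CAU", "RAO_LAO", "mode"]  # assumed names

def extract_parameters(text_path, csv_path):
    with open(text_path) as src, open(csv_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=FIELDS)
        writer.writeheader()
        for line in src:
            parts = line.split()
            if len(parts) < len(FIELDS):
                continue  # skip lines that are not exposure records
            writer.writerow(dict(zip(FIELDS, parts[:len(FIELDS)])))

# extract_parameters("case_001_params.txt", "case_001_params.csv")
```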
The eye lens is a very radiosensitive organ and is at risk for cataractogenesis during neuro-interventional procedures. It is paramount that the lens be exposed to the x-ray beam as little as possible while still allowing the clinical task to be completed. In this preliminary investigation, a convolutional neural network (CNN) has been created to identify whether the lens is within the x-ray projection image and where it is located, with the intent of facilitating lens-dose estimation. The model was trained using a database of patient radiographic skull images with different views in order to generalize the data. The size of the dataset was increased by rotating the images at various angles, and a mask was created for each image by hand-contouring the eye socket. For image segmentation, a U-Net model was used, consisting of a down-block, bottleneck, and up-block. Different network parameters were tested, and receiver operating characteristic (ROC) curves, together with Jaccard indices, were assessed to identify the best model. The end goal of this project is implementation of the model in the real-time Canon Dose Tracking System (DTS) during interventional fluoroscopic procedures. This will allow the DTS to identify more accurately where the lens is, whether fully or only partially in the beam. With this information, a more accurate calculation of the eye-lens dose can be performed, allowing the patient's dose to be more carefully monitored.
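The Jaccard-index evaluation of a segmentation output against the hand-contoured mask could be computed as in the sketch below; the 0.5 threshold and the threshold sweep are assumed example operating points, not those selected in the study.

```python
# Sketch of the Jaccard-index evaluation for a thresholded U-Net output.
import numpy as np

def jaccard_index(pred_prob, truth_mask, threshold=0.5):
    pred = pred_prob >= threshold
    truth = truth_mask.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 if union == 0 else intersection / union

# Sweeping the threshold gives one point per operating point, analogous to
# building an ROC-style curve for the segmentation output.
def jaccard_curve(pred_prob, truth_mask, thresholds=np.linspace(0.1, 0.9, 9)):
    return [(t, jaccard_index(pred_prob, truth_mask, t)) for t in thresholds]
```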
KEYWORDS: Signal attenuation, Virtual reality, Cameras, Detection and tracking algorithms, Visualization, Opacity, Algorithm development, Object recognition, Monte Carlo methods, Matrices
We have developed a prototype scatter-display system (SDS) which includes a top-down-view, virtual reality (VR) representation of an interventional room containing a color-coded scatter dose-rate distribution in real time. To represent various attenuating objects of interest in the room, such as the C-Arm gantry and ceiling-mounted shield, the STL toolbox in MATLAB was used to produce a 3D VR description of the objects. Attenuation by objects in the room will alter the dose distribution and may lead to shielding of individual staff members, and thus representation of those objects in the software is needed for precise dose-rate estimations. Determination of the spatial regions of attenuation requires accurate specification of object position. To maintain identification of the ceiling-mounted shield, we implemented an open-source package which performs object recognition using the depth-camera feed of a Microsoft Kinect V2 and the features-from-accelerated-segment-test (FAST) algorithm in OpenCV for a dense sampling of salient features. The depth information from the identified object is transferred to an open-source Robot Operating System (ROS) wrapper for specification of the 3D position to be fed into the SDS. To compute the C-Arm gantry position, we take advantage of a controller-area network (CAN) bus interfaced with the angiography system's application programming interface (API). Methods for computing gantry and ceiling-mounted-shield shadow regions are discussed and demonstrated. FAST was applied to the ceiling-shield assembly's flange with reliable recognition. Including object attenuation of room scatter in the SDS will facilitate accurate dose-rate computation.
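A hedged sketch of the FAST keypoint step on a Kinect V2 depth frame is given below. The detector threshold and the depth-normalization range are assumed example values, and the pairing of each keypoint with its depth value is only an illustration of the data handed to the ROS wrapper.

```python
# Sketch of FAST keypoint detection on a Kinect V2 depth frame using OpenCV.
import numpy as np
import cv2

def fast_keypoints(depth_mm, max_depth=4500, threshold=20):
    # Rescale the 16-bit depth map (millimetres) to 8 bits for the detector.
    depth8 = cv2.convertScaleAbs(depth_mm, alpha=255.0 / max_depth)
    fast = cv2.FastFeatureDetector_create(threshold=threshold,
                                          nonmaxSuppression=True)
    keypoints = fast.detect(depth8, None)
    # Each keypoint carries pixel coordinates; the corresponding depth value
    # supplies the third coordinate passed on for 3D position estimation.
    return [(kp.pt[0], kp.pt[1], int(depth_mm[int(kp.pt[1]), int(kp.pt[0])]))
            for kp in keypoints]
```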
The functionality of a real-time, top-down-view virtual reality (VR) display of scattered radiation during fluoroscopic interventional procedures is being expanded to incorporate automatic input of staff-member locations. Microsoft Kinect V2 depth-sensing camera input was integrated into an open-source Robot Operating System (ROS) wrapper to facilitate automatic extraction of relative landmark body-feature coordinates. Coordinates for the torso are selected to represent the staff-member location in the selected plane of scatter; these coordinates are stored in a text file to be input into the real-time scatter display system (SDS). Accuracy of the depth-sensing camera was evaluated using a pinhole camera model. This model was also implemented in an ROS wrapper to calibrate the Microsoft Kinect V2. Calibrated values were then used within a coordinate-transformation algorithm which converts the physical distance measurements in the frame of the Kinect to normalized coordinates used in MATLAB for visualization of the top-down horizontal plane of the interventional suite. Impact on real-time performance was evaluated both for the staff-member position update on screen and for the update of SDS image frames.
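The pinhole back-projection and normalization could be implemented as in the sketch below, where a calibrated focal length and principal point convert a torso pixel plus its depth into metric coordinates that are then scaled to the top-down display plane. The intrinsics, room extents, and camera offset shown are assumed placeholder values, not the calibrated quantities from this work.

```python
# Sketch of pinhole-model back-projection from a Kinect pixel + depth to
# normalized top-down room coordinates for the SDS display.
FX = 365.0                   # assumed depth-camera focal length (pixels)
CX = 256.0                   # assumed principal point x-coordinate (pixels)
ROOM_X, ROOM_Y = 6.0, 6.0    # assumed room extent (metres) for normalization

def pixel_to_room(u, v, depth_mm, kinect_offset=(3.0, 0.0)):
    """Map depth-image pixel (u, v) with depth in mm to coordinates in [0, 1]
    within the horizontal plane of the interventional suite."""
    z = depth_mm / 1000.0            # distance along the optical axis (metres)
    x = (u - CX) * z / FX            # lateral offset (metres)
    # Top-down plane: shift by the (assumed) Kinect position within the room;
    # the vertical coordinate is not needed for the horizontal-plane display.
    room_x = kinect_offset[0] + x
    room_y = kinect_offset[1] + z
    return room_x / ROOM_X, room_y / ROOM_Y

print(pixel_to_room(300, 200, 2200))
```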
The lens of the eye can receive a substantial amount of radiation during neuro-interventional fluoroscopic procedures, increasing the risk of cataractogenesis for the patient. The purpose of this study is to investigate the variation of eye-lens dose with the location of the beam isocenter in the head. The primary x-ray beam of a Toshiba (Canon) Infinix fluoroscopy machine was modeled using EGSnrc Monte Carlo code, and the lens dose was calculated using 2 × 10¹⁰ photons incident on the anthropomorphic Zubal computational head phantom for each simulation. The Zubal phantom is derived from a CT scan of an average adult male and has internal organs, including the lenses, segmented for dose calculation. Computations were performed with the head shifted vertically ±4 cm and in the cranial-caudal and lateral directions incrementally up to 6 cm in either direction. At each position, the gantry was rotated to various LAO/RAO and CAU/CRA angles, both 5 cm × 5 cm and 10 cm × 10 cm entrance field sizes were used, and the kVp was varied. The results show that substantial changes in lens dose occur when the head is shifted and can result in a dose difference between eyes of over 6 times at certain beam angles for the 5 cm × 5 cm field size. The results of this study should provide increased accuracy in lens-dose estimation during neuro-interventional procedures and, when incorporated into our real-time dose-tracking system, help interventionalists manage patient lens dose during the procedure to minimize risk.
The purpose of this study is to investigate how the scattered-radiation distribution in the interventional procedure room varies with changes in cranial/caudal (CRA/CAU) and right anterior oblique/left anterior oblique (RAO/LAO) gantry angulation of a C-Arm fluoroscopic system, to aid in staff dose management. The primary x-ray beam of a Toshiba Infinix fluoroscopy machine was modeled using EGSnrc (DOSXYZnrc) Monte Carlo code, and the scattered-radiation distributions were calculated using 5 × 10⁹ photons incident on the Zubal computational phantom. The Zubal phantom is derived from a CT scan of an average adult male and is anthropomorphic with internal organs. The results show that substantial changes in the scatter dose are possible for the interventionalist next to the table with CRA/CAU and RAO/LAO angle variations. For frontal projections, the largest change with CRA/CAU angle occurs below table height, increasing by 50% at the position of the interventionalist next to the table for a 30-degree cranial compared with a caudal angulation for a beam directed toward the abdomen. The scattered-radiation distribution is also shown to change with different body regions, such as the chest and abdomen. A library of 3D scatter dose-rate distributions is being developed to be implemented in a scatter display system for increased staff awareness of dose levels during procedures.
KEYWORDS: MATLAB, Virtual reality, Visualization, Cameras, Detection and tracking algorithms, Software development, Monte Carlo methods, Human-machine interfaces, Calibration, Video
We have been working on the development of a Scatter Display System (SDS) for monitoring and displaying scatter-related dose to staff members during fluoroscopic interventional procedures. We have considered various methods for such a display using augmented reality (AR) and computer-generated virtual reality (VR). The current work focuses on development of the VR SDS display, which shows the color-coded scattered-dose distribution in a horizontal plane at a selected height above the floor in a top-down view of the interventional suite. We report the first development of the methodology for real-time functionality of this software via integration of controller-area network (CAN) bus digital signals from the Canon C-Arm Biplane System. Importing the CAN bus information allows immediate selection of the appropriate pre-calculated scatter dose distribution consistent with the x-ray beam orientation and characteristics, as well as selection of the proper gantry and table graphic for the display. The Python CAN interface module was used to streamline integration of the CAN bus interface. Development of real-time functionality for the SDS allows it to provide feedback to staff during clinical procedures for informed dose management; the SDS can work alongside the patient skin-dose tracking system (DTS) for complete clinical monitoring of staff and patient dose.
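A hedged sketch of a listening loop built on the python-can module is shown below, selecting a pre-calculated scatter distribution whenever relevant parameters change. The channel name, interface type, arbitration IDs, and byte layout are all placeholders; the real packet definitions follow the angiography system's reference table.

```python
# Hedged sketch of a CAN-bus listening loop using the python-can module.
import can

GANTRY_ANGLE_ID = 0x123   # assumed arbitration ID for gantry angulation
KVP_ID = 0x124            # assumed arbitration ID for tube potential

def listen(scatter_lut):
    bus = can.interface.Bus(channel="can0", bustype="socketcan")  # assumed setup
    state = {"angle": 0, "kvp": 80}
    for msg in bus:                              # blocking iteration over packets
        if msg.arbitration_id == GANTRY_ANGLE_ID:
            state["angle"] = int.from_bytes(msg.data[0:2], "little", signed=True)
        elif msg.arbitration_id == KVP_ID:
            state["kvp"] = msg.data[0]
        # Select the pre-calculated scatter distribution matching the new state.
        distribution = scatter_lut.get((state["angle"], state["kvp"]))
        if distribution is not None:
            yield distribution                   # hand off to the display update
```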
A new image receptor has recently been introduced that has a standard flat-panel detector (FPD) mode as well as a high-definition (Hi-Def) zoom mode. The Dose Tracking System (DTS), which our group has developed, has been expanded in functionality to allow analysis of the skin-dose contribution of the Hi-Def mode during fluoroscopic interventional procedures. A clinical version of the DTS records all geometric and exposure-technique parameters from a digital interface on the Canon Biplane Angiography System in log files during interventional procedures. Previous work on the enhancement of our group's DTS led to the development of a replay function which facilitates playback of the log files. Within the replay feature, modifications have been made to allow separate evaluation of exposures from each detector mode, as identified by signals for the magnification (MAG) mode being used. The current work utilizes this separation method for neuro-interventional cases performed with the new image receptor to retrospectively analyze dose-related contributions from the Hi-Def mode as compared to FPD usage. Peak skin dose (PSD) and dose-area product (DAP) were evaluated for six clinical cases under IRB approval. Three de-identified log files were also included to demonstrate the method for separation of PSD as well as the variation with procedure type. Ratios of FPD PSD and DAP to Hi-Def values were determined for a subset of three cases during which the new image receptor was implemented.
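The mode-separated DAP comparison could be tallied as in the brief sketch below, which groups replayed exposure events by detector mode (as flagged by the MAG-mode signal). The column names are assumptions about a parsed log, and peak skin dose would come from the DTS skin-dose map rather than a simple sum.

```python
# Sketch of grouping exposure events by detector mode and forming a DAP ratio.
import pandas as pd

def dap_ratio_by_mode(events_csv):
    df = pd.read_csv(events_csv)                 # assumed columns: mode, dap
    totals = df.groupby("mode")["dap"].sum()
    return totals.get("FPD", 0.0) / totals.get("HiDef", float("nan"))
```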
The forward-scatter dose distribution generated by the patient table during fluoroscopic interventions and its contribution to the skin dose is studied. The forward-scatter dose distribution to the skin generated by a water table-equivalent phantom and by the patient table is calculated using EGSnrc Monte Carlo simulation and Gafchromic film as a function of x-ray field size and beam penetrability. Forward-scatter point spread functions (PSFn) were generated with EGSnrc from a 1×1 mm simulated primary pencil beam incident on the water model and on the patient table. The forward-scatter point spread function, normalized to the primary, is convolved over the primary-dose distribution to generate scatter-dose distributions. The utility of the PSFn for calculating the entrance skin-dose distribution using the DTS (dose tracking system) software is investigated. The forward-scatter distribution calculations were performed for 2.32 mm, 3.10 mm, 3.84 mm, and 4.24 mm Al HVL x-ray beams with 5×5 cm, 9×9 cm, and 13.5×13.5 cm x-ray fields for water, and for a 3.1 mm Al HVL x-ray beam with a 16.5×16.5 cm field for the patient table. The skin dose is determined with the DTS by convolution of the scatter-dose PSFn and with Gafchromic film under PMMA "patient-simulating" blocks for uniform and for shaped x-ray fields. The normalized forward-scatter distribution determined using the convolution method for the water table-equivalent phantom agreed with that calculated for the full field using EGSnrc within ±6%. The normalized forward-scatter dose distribution calculated for the patient table for a 16.5×16.5 cm FOV agreed with that determined using film within ±2.4%. For the homogeneous PMMA phantom, the skin dose calculated using the DTS was within ±2% of that measured with the film for both uniform and non-uniform x-ray fields. The convolution method provides improved accuracy over using a single forward-scatter value over the entire field and is a faster alternative to performing full-field Monte Carlo calculations.
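The convolution step itself can be illustrated with the short sketch below, in which the primary-normalized PSF is convolved over the primary-dose distribution to obtain the scatter-dose distribution. The grid size, Gaussian PSF shape, and scatter fraction are placeholder values, not the 1 mm EGSnrc-derived kernels used in this work.

```python
# Sketch of the PSF convolution used to generate the scatter-dose distribution.
import numpy as np
from scipy.signal import fftconvolve

def scatter_dose(primary_dose, psf_normalized):
    """primary_dose and psf_normalized are 2D arrays on the same grid spacing."""
    return fftconvolve(primary_dose, psf_normalized, mode="same")

# Stand-in example: a uniform 5x5 cm field on a 1 mm grid and a Gaussian PSF
# scaled to an assumed total scatter fraction of 5% of the primary.
primary = np.zeros((200, 200))
primary[75:125, 75:125] = 1.0
yy, xx = np.mgrid[-50:51, -50:51]
psf = np.exp(-(xx**2 + yy**2) / (2 * 15.0**2))
psf *= 0.05 / psf.sum()
total_skin_dose = primary + scatter_dose(primary, psf)
```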