Immersive, stereoscopic displays may be instrumental to better interpreting 3-dimensional (3D) data. Furthermore, the advent of commodity-level virtual reality (VR) hardware has made this technology accessible for meaningful applications, such as medical education. Accordingly, in the current work we present a commodity-level, immersive simulation for interacting with human ear anatomy. In the simulation, users may interact simultaneously with high-resolution computed tomography (CT) scans and their corresponding 3D anatomical structures. The simulation includes: (1) a commodity-level, immersive virtual environment presented by the Oculus CV1, (2) segmented 3D models of head and ear structures generated from a CT dataset, (3) the ability to freely manipulate 2D and 3D data synchronously, and (4) a user interface which allows for free exploration and manipulation of data using the Oculus touch controllers. The system was demonstrated to 10 otolaryngologists for evaluation. Physicians were asked to supply feedback via both questionnaire and discussion in order to determine the efficacy of the current system as well as the most pertinent applications for future research.
Ideal treatment of trauma, especially that which is sustained during military combat, requires rapid management to optimize patient outcomes. Medical transport teams 'scoop-and-run' to trauma centers to deliver the patient within the 'golden hour', which has been shown to reduce the likelihood of death. During transport, emergency medical technicians (EMTs) perform numerous procedures from tracheal intubation to CPR, sometimes documenting the procedure on a piece of tape on their leg, or not at all. Understandably, the EMT's focus on the patient precludes real-time documentation; however, this focus limits the completeness and accuracy of information that can be provided to waiting trauma teams. Our aim is to supplement communication that occurs en route between the point of injury and receiving facilities by passively tracking and identifying the actions of EMTs as they care for patients during transport. The present work describes an initial effort to generate a coordinate system relative to the patient's body and track an EMT's hands over the patient as procedures are performed. This 'patient space' coordinate system allows the system to identify which areas of the body were the focus of treatment (e.g., time spent over the chest may indicate CPR, while time spent over the face may indicate intubation). Using this patient space and hand motion over time in the space, the system can produce heatmaps depicting the parts of the patient's body that are treated most. From these heatmaps and other inputs, the system attempts to construct a sequence of clinical procedures performed over time during transport.
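The heatmap construction described above can be sketched as binning tracked hand positions over a discretized patient-space plane. The grid resolution, body extent, and simulated samples below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def hand_heatmap(positions, grid_shape=(40, 20), extent=((0.0, 2.0), (-0.5, 0.5))):
    """Accumulate tracked hand samples into a 2D occupancy heatmap over
    the patient-space plane (x along the body in meters, y across it).

    `positions` is an (N, 2) array of hand samples in patient coordinates;
    the grid shape and body extent here are hypothetical choices.
    """
    (x0, x1), (y0, y1) = extent
    heat, _, _ = np.histogram2d(
        positions[:, 0], positions[:, 1],
        bins=grid_shape, range=[[x0, x1], [y0, y1]])
    return heat / max(heat.sum(), 1)  # normalize to fraction of samples per cell

# Simulated hand samples clustered over the chest (x ~ 0.5 m from the head),
# as might be recorded while an EMT performs CPR.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[0.5, 0.0], scale=[0.05, 0.05], size=(500, 2))
heat = hand_heatmap(samples)
chest_cell = np.unravel_index(heat.argmax(), heat.shape)  # hottest grid cell
```

In a full system, the hottest regions of such a map over a time window would feed a classifier that maps dwell regions to candidate procedures (chest to CPR, face to intubation).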
KEYWORDS: Virtual reality, Computed tomography, Visualization, 3D modeling, Surgery, Data modeling, Medical imaging, Head-mounted displays, 3D displays, 3D image processing
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in this condition is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
This paper evaluates the performance of two non-rigid image registration techniques. The moving least squares (MLS) technique is compared to the more common thin-plate spline (TPS) method. Both methods interpolate a set of fiducial points in registering two images. An attractive feature of the MLS method is that it seeks to minimize local scaling and shearing, producing a global transformation that is as rigid as possible. The MLS and TPS techniques are applied to two- and three-dimensional medical images. Both qualitative and quantitative comparisons are presented. The two techniques are quantitatively evaluated by computing target registration errors (TREs) at selected points of interest. Our results indicate that the MLS algorithm performs better than the TPS method, with lower TRE values and visually better registered images, indicating that MLS may be a better candidate for registration tasks when rigid registration is insufficient but a minimal deformation field is desired.
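The TRE metric used for the quantitative comparison is simply the Euclidean distance between each point of interest mapped through the estimated transform and its known true location. A minimal sketch, using a hypothetical translation-only transform in place of the paper's MLS/TPS warps:

```python
import numpy as np

def target_registration_error(transform, targets, ground_truth):
    """Target registration error (TRE): Euclidean distance between each
    target point mapped through the estimated transform and its known
    true position in the fixed image."""
    mapped = np.array([transform(p) for p in targets])
    return np.linalg.norm(mapped - ground_truth, axis=1)

# Illustrative 2D example: the true motion is a translation, and the
# registration has recovered it with a small residual error.
true_shift = np.array([3.0, -1.0])
est_shift = np.array([3.1, -0.9])  # imperfectly estimated registration
targets = np.array([[10.0, 5.0], [20.0, 8.0], [15.0, 12.0]])
tre = target_registration_error(lambda p: p + est_shift,
                                targets, targets + true_shift)
mean_tre = tre.mean()  # here every point shares the same ~0.141 error
```

In practice the transform argument would be the fitted MLS or TPS deformation, and the targets would be anatomical landmarks excluded from the fiducial set used to fit it.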
We are developing and evaluating a system that will facilitate the placement of deep brain stimulators (DBS) used to treat movement disorders including Parkinson's disease and essential tremor. Although our system does not rely on the common reference system used for functional neurosurgical procedures, which is based on the anterior and posterior commissure points (AC and PC), automatic and accurate localization of these points is necessary to communicate the positions of our targets. In this paper, we present an automated method for AC and PC selection that uses non-rigidly deformable atlases. To evaluate the accuracy of our multi-atlas based method, we compare it against the manual selection of the AC and PC points by 43 neurosurgeons (38 attendings and 5 residents) and show that its accuracy is submillimetric relative to the median of their selections. We also analyze the effect of AC-PC localization inaccuracy on the localization of common DBS targets.
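A multi-atlas landmark method of this kind typically maps each atlas's known AC/PC positions onto the subject through a non-rigid registration, then fuses the resulting candidates. The fusion step can be sketched as a component-wise median, which is robust to a badly registered atlas; the candidate coordinates below are hypothetical, and the registration step is assumed to have already produced them:

```python
import numpy as np

def fuse_atlas_predictions(candidates):
    """Fuse per-atlas landmark predictions by component-wise median,
    which tolerates an outlier from a poorly registered atlas.

    `candidates` is an (n_atlases, 3) array giving one landmark's
    position (in mm) after each atlas is non-rigidly mapped onto
    the subject image."""
    return np.median(candidates, axis=0)

# Five hypothetical atlas-propagated AC positions, one of them an outlier.
ac_candidates = np.array([
    [0.1, 2.0, -4.9],
    [0.0, 2.1, -5.0],
    [-0.1, 1.9, -5.1],
    [0.2, 2.0, -5.0],
    [3.0, 4.0, -2.0],  # badly registered atlas
])
ac_estimate = fuse_atlas_predictions(ac_candidates)  # outlier is ignored
```

The same median-based comparison applies to the evaluation in the abstract: the consensus of the 43 manual selections is taken as the per-point median, against which the automated estimate is measured.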