KEYWORDS: Transducers, Spatial filtering, Ultrasonography, Signal to noise ratio, Image quality, Apodization, Visualization, Design and modelling, Imaging systems, Data acquisition
Purpose: Current ultrasound (US)-image-guided needle insertions often demand considerable expertise from clinicians because performing tasks in three-dimensional space using two-dimensional images requires operators to cognitively maintain the spatial relationships between the US probe, the needle, and the lesion. This work presents forward-viewing US imaging with a ring array configuration to enable needle interventions without requiring registration between tools and targets. Approach: The center-open ring array configuration allows the needle to be inserted from the center of the visualized US image, providing simple and intuitive guidance. To establish the feasibility of the ring array configuration, the design parameters affecting image quality, including the radius of the center hole and the numbers of ring layers and transducer elements, were investigated. Results: Experimental results showed successful visualization even with a hole in the transducer array, and target visibility improved as the number of ring layers and the number of transducer elements in each ring layer increased. Reducing the hole radius improved image quality in the shallow-depth region. Conclusions: Forward-viewing US imaging with a ring array configuration has the potential to be a viable alternative to conventional US image-guided needle insertion methods.
Percutaneous nephrolithotomy (PCNL) is a minimally invasive surgical procedure for the removal of kidney stones, typically those >1 cm. The procedure involves inserting a needle through the patient's skin into the kidney and is now increasingly performed under ultrasound (US) guidance. Existing US image-guided needle insertion employed in PCNL faces the challenge of keeping the needle tip visible during the insertion process. We propose a needle insertion mechanism with mirror-integrated US imaging, which provides an intuitive and simple way to monitor the needle insertion path. This is achieved by using an acoustic mirror to redirect the US image plane while the needle passes through an opening in the middle of the mirror, so that the needle path aligns with the US image plane. Both the needle and the acoustic mirror are designed to be rotatable, giving the clinician the flexibility to search for the optimal needle insertion orientation. According to the law of acoustic wave reflection, the needle must rotate twice as far as the mirror to stay aligned with the US image plane. A synchronization mechanism consisting of belts and pulleys was designed to achieve this 2:1 rotation ratio. Needle-mirror synchronized rotation was tested using an image-processing-based method. In terms of imaging functionality, the US images clearly display point targets inside a gelatin phantom as well as the needle tip. In the needle insertion experiment, a needle was inserted into the gelatin phantom to reach point targets, and the results show insertion errors <3 mm. Overall, our results demonstrate the potential of the proposed US image-guided access mechanism in clinical scenarios.
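The 2:1 rotation ratio follows directly from the law of reflection: rotating a mirror by an angle φ rotates the reflected beam by 2φ. A minimal numerical sketch of this geometry (the angles and 2D setup here are illustrative, not the authors' implementation):

```python
import math

def reflect(d, n):
    """Reflect direction vector d about unit mirror normal n: r = d - 2(d.n)n."""
    dot = d[0]*n[0] + d[1]*n[1]
    return (d[0] - 2*dot*n[0], d[1] - 2*dot*n[1])

def unit(theta):
    """Unit vector at angle theta (radians)."""
    return (math.cos(theta), math.sin(theta))

# Incident US beam travels along +x; a mirror normal at 135 degrees
# deflects the beam by 90 degrees (along +y).
d = (1.0, 0.0)
theta0 = math.radians(135)
r0 = reflect(d, unit(theta0))

# Rotate the mirror by an extra angle phi: the reflected beam rotates by 2*phi,
# so the needle must rotate 2*phi to stay in the image plane.
phi = math.radians(10)
r1 = reflect(d, unit(theta0 + phi))

ang = lambda v: math.degrees(math.atan2(v[1], v[0]))
print(round(ang(r1) - ang(r0), 6))  # 20.0, i.e. twice the 10-degree mirror rotation
```

Driving the needle at twice the mirror's angular rate, as the belt-and-pulley mechanism does mechanically, keeps the needle path inside the deflected image plane for any mirror orientation.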
Teleoperated robotic technology has great potential to deliver healthcare, such as surgery, diagnosis, and nursing, as an alternative to in-person encounters, with specialists operating remotely. Most teleoperated procedures rely heavily on visual feedback from cameras to observe the situation. While the camera position and orientation should ideally be optimized adaptively depending on the task and circumstances, view adjustment requires the operator either to dedicate concentration to camera control or to pause the task during the adjustment. There is thus a demand for a more intuitive telepresence method to improve the efficiency and performance of remote operation. This paper proposes a hands-free approach to controlling the camera view for an improved telepresence experience. The system comprises an RGBD camera mounted on a robotic arm and a motion-tracking virtual reality (VR) head-mounted display (HMD) that maps the operator's head and upper-body motion to the robotic arm for immersive teleoperation. Based on this setup, an Augmented Head Motion Mapping (AHMM) mode is introduced, in which the user can choose to control the camera either by following head motion directly or by following a remote center of motion (RCM) at the target location, so that the reachable visual field is expanded. Through a user study with seven subjects, we evaluated the proposed method against other conventional methods in terms of reachable visual field, control intuitiveness, and task efficiency. The possibility of further enlarging the reachable visual field by introducing a motion scaling factor was investigated through simulation. The results demonstrate that an operator using the proposed system can examine a larger area of a given object within a similar amount of time and with limited training.
KEYWORDS: Transducers, Imaging systems, Data acquisition, Signal to noise ratio, Ultrasonography, Image registration, Visualization, Reconstruction algorithms, Image restoration, 3D printing
Current standard workflows of ultrasound (US)-guided needle insertion require physicians to use both hands: holding the US probe with the non-dominant hand to locate areas of interest, and the needle with the dominant hand. This is due to the separation of the localization and needle insertion functionalities. This requirement not only makes the procedure cumbersome but also limits the reliability of guidance, given that the positional relationship between the needle and US images is inferred from the operator's experience and assumptions. Although US-guided needle insertion can be assisted by navigation systems, recovering the positional relationship between the needle and US images requires external tracking systems and image-based tracking algorithms that may introduce registration inaccuracy. Therefore, there is an unmet need for a solution that provides simple and intuitive needle localization and insertion to improve the conventional US-guided procedure. In this work, we propose a new device concept based on a ring-arrayed forward-viewing (RAF) ultrasound imaging system. The proposed system comprises ring-arrayed transducers and an open hole inside the ring through which the needle can be inserted. The ring array provides forward-viewing US images in which the needle is visualized at the center of the reconstructed image without any registration. As a proof of concept, we designed several ring-arrayed configurations and visualized point targets using forward-viewing US imaging in simulations and phantom experiments. The results demonstrate successful target visualization and indicate that ring-arrayed US imaging has the potential to make the US-guided needle insertion procedure simpler and more intuitive.
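The abstract does not state which reconstruction algorithm produces the forward-viewing image; delay-and-sum beamforming is the standard choice for array imaging and is assumed in the sketch below. All geometry and parameter values (element count, hole radius, sampling rate) are illustrative, not the authors' design:

```python
import numpy as np

C = 1540.0          # assumed speed of sound in tissue [m/s]
FS = 40e6           # assumed sampling frequency [Hz]
HOLE_RADIUS = 2e-3  # open center left free for the needle [m]

def ring_elements(n_elements, radius):
    """(x, y, 0) element positions on one ring layer outside the hole."""
    ang = 2 * np.pi * np.arange(n_elements) / n_elements
    return np.stack([radius * np.cos(ang), radius * np.sin(ang),
                     np.zeros(n_elements)], axis=1)

def das_pixel(rf, elems, point):
    """Delay-and-sum one image point: sum each element's RF sample at the
    round-trip delay from that element to `point`.
    rf: (n_elements, n_samples) pulse-echo data, one trace per element."""
    val = 0.0
    for i, e in enumerate(elems):
        dist = np.linalg.norm(point - e)
        idx = int(round(2 * dist / C * FS))  # round-trip delay in samples
        if idx < rf.shape[1]:
            val += rf[i, idx]
    return val

# Usage: 64 elements on a ring just outside the hole, a point target 20 mm deep
elems = ring_elements(64, HOLE_RADIUS + 1e-3)
rf = np.zeros((64, 4096))
target = np.array([0.0, 0.0, 20e-3])
for i, e in enumerate(elems):  # synthesize an ideal echo from the target
    rf[i, int(round(2 * np.linalg.norm(target - e) / C * FS))] = 1.0
print(das_pixel(rf, elems, target))  # 64.0: echoes sum coherently at the target
```

Because the delays are computed per element from its actual position, the central hole simply removes some summands; the remaining elements still focus coherently, which is consistent with the reported result that visualization succeeds despite the hole.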
Recently, in the United States as well as other countries, the shortage of obstetricians and gynecologists (ob-gyns) has grown serious. Obstetrics and gynecology has a high burnout rate compared with other medical specialties because of increased workloads and competing administrative demands. There is thus a demand for assistance with prenatal care procedures, especially ultrasonography. Although several robot-assisted ultrasound imaging platforms have been developed, few have focused on prenatal care. In this paper, we propose an ultrasonography assistance robot for prenatal care to reduce the workload of obstetricians and gynecologists. In prenatal care, it is crucially important to ensure the safety of the pregnant woman and fetus, more so than in other areas of ultrasonography. This paper serves as a proof of concept of the ultrasonography assistance robot for prenatal care by demonstrating scanning of the uterus and estimation of amniotic fluid volume for assessing fetal health with a fetal US imaging phantom, and by demonstrating clinical feasibility in one pregnant woman. As the key technology for ensuring both safety and acquired image quality, we propose a mechanism with constant-force springs that allows the US probe to shift flexibly depending on the abdominal height. The proposed robot system scanned the entire uterus area of the fetal phantom while keeping the contact force below that applied in clinical procedures (about 15 N). Additionally, as a first application for evaluating fetal health automatically, a system to estimate the amniotic fluid volume (AFV) from the US images acquired with the robot was developed and evaluated with the fetal phantom. The results show estimation errors within 10%. Finally, we demonstrated the robotic US scan on one pregnant woman and successfully observed the body parts of the fetus.
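The abstract does not describe how the AFV estimator works; one standard way to estimate a fluid volume from a robotic sweep is to segment the fluid region in each US slice and integrate the cross-sectional areas along the sweep (Cavalieri's principle). The sketch below assumes that approach; the function names, phantom geometry, and parameters are all hypothetical:

```python
import numpy as np

def volume_from_slices(masks, pixel_area_mm2, slice_spacing_mm):
    """Integrate segmented cross-sectional areas into a volume [mm^3].
    masks: (n_slices, H, W) binary segmentations of the fluid region."""
    areas_mm2 = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_mm2
    return float(areas_mm2.sum() * slice_spacing_mm)

# Usage: a synthetic spherical "fluid pocket" of radius 30 mm, sampled as
# circular cross sections on 1 mm slices with 1 mm x 1 mm pixels.
spacing = 1.0                      # mm between slices
zs = np.arange(-30, 31) * spacing  # slice positions through the pocket
masks = []
for z in zs:
    r = np.sqrt(max(30.0**2 - z**2, 0.0))  # circle radius at this depth
    yy, xx = np.mgrid[-40:40, -40:40]      # 1 mm pixel grid
    masks.append(xx**2 + yy**2 <= r**2)
vol = volume_from_slices(np.array(masks), 1.0, spacing)

sphere = 4.0 / 3.0 * np.pi * 30.0**3       # analytic volume of the sphere
print(abs(vol - sphere) / sphere < 0.05)   # True: within ~5% of ground truth
```

With 1 mm slices and pixels, the stacked-area estimate lands within a few percent of the analytic sphere volume, which is the same order as the <10% estimation error reported for the phantom.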