Percutaneous nephrolithotomy (PCNL) is a minimally invasive surgical procedure for removing kidney stones, typically those larger than 1 cm. The procedure involves inserting a needle through the patient's skin into the kidney, and it is now more commonly performed under ultrasound (US) guidance. Existing US image-guided needle insertion in PCNL faces the challenge of keeping the needle tip visible throughout the insertion. We propose a needle insertion mechanism with mirror-integrated US imaging, which provides an intuitive and simple way to monitor the needle insertion path. An acoustic mirror redirects the US image plane while the needle passes through a gap in the middle of the mirror, so that the needle path stays aligned with the image plane. Both the needle and the acoustic mirror are rotatable, giving the clinician the flexibility to search for the optimal needle insertion orientation. By the law of acoustic wave reflection, the needle must rotate twice as far as the mirror to remain aligned with the US image plane; a synchronization mechanism consisting of belts and pulleys was designed to enforce this 2:1 rotation ratio. The synchronized needle-mirror rotation was verified using an image-processing-based method. In terms of imaging functionality, the US images clearly display both point targets inside a gelatin phantom and the needle tip. In the needle insertion experiment, a needle inserted into the gelatin phantom reached point targets with insertion errors below 3 mm. Overall, these results demonstrate the potential of the proposed US image-guided access mechanism in clinical scenarios.
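The 2:1 rotation ratio follows directly from the law of reflection: tilting a mirror by an angle θ rotates the reflected beam by 2θ, so the needle must track twice the mirror angle to stay in the reflected image plane. A minimal numerical sketch (function names are illustrative, not from the paper) confirms this relationship in 2D:

```python
import math

def reflect(d, n):
    """Reflect direction vector d across a plane with unit normal n: r = d - 2(d.n)n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

def reflected_angle_deg(mirror_tilt_deg):
    """Angle of the reflected beam for an incident beam along +x
    and a mirror tilted by mirror_tilt_deg from the x-axis."""
    phi = math.radians(mirror_tilt_deg)
    n = (-math.sin(phi), math.cos(phi))  # unit normal of the tilted mirror
    r = reflect((1.0, 0.0), n)
    return math.degrees(math.atan2(r[1], r[0]))

# The reflected beam rotates exactly twice as fast as the mirror:
for tilt in (10, 20, 30):
    print(tilt, round(reflected_angle_deg(tilt), 6))  # prints 20, 40, 60 degrees
```

This is why the belt-and-pulley linkage in the paper uses a fixed 2:1 gear ratio between the needle holder and the mirror mount.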
Teleoperated robotic technology has great potential for delivering healthcare remotely, allowing specialists to control procedures such as surgery, diagnosis, and nursing that would otherwise require in-person encounters. Most teleoperated procedures rely heavily on visual feedback from cameras. Ideally, the camera position and orientation would be optimized adaptively for the task and circumstances, but adjusting the view currently requires the operator either to devote attention to camera control or to pause the task during the adjustment. There is therefore demand for a more intuitive telepresence method to improve the efficiency and performance of remote operation. This paper proposes a hands-free approach to controlling the camera view for an improved telepresence experience. The system comprises an RGBD camera mounted on a robotic arm and a motion-tracking virtual reality (VR) head-mounted display (HMD) that maps the user's head and upper-body motion to the robotic arm for immersive teleoperation. On top of this setup, an Augmented Head Motion Mapping (AHMM) mode is introduced: the user can choose to have the camera either follow the head motion directly or pivot about a remote center of motion (RCM) at the target location, expanding the reachable visual field. Through a user study with seven subjects, we evaluated the proposed method against conventional methods in terms of reachable visual field, control intuitiveness, and task efficiency. The possibility of further enlarging the reachable visual field by introducing a motion scaling factor was investigated in simulation. The results demonstrate that an operator using the proposed system can examine a larger area of a given object within a similar amount of time and with limited training.
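The combination of RCM-pivoting and a motion scaling factor can be sketched as placing the camera on a sphere around the target, steered by scaled head angles; the spherical parameterization and all names below are illustrative assumptions, not the paper's actual mapping:

```python
import math

def camera_pose_about_rcm(target, radius, head_yaw_deg, head_pitch_deg, scale=1.0):
    """Place the camera on a sphere of the given radius around the RCM target,
    steered by (scaled) operator head angles; the camera always looks at the
    target. A scale > 1 lets a small head rotation sweep a larger arc, which
    is how a motion scaling factor can expand the reachable visual field.
    All names and the parameterization are illustrative assumptions."""
    yaw = math.radians(scale * head_yaw_deg)
    pitch = math.radians(scale * head_pitch_deg)
    x = target[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = target[1] + radius * math.sin(pitch)
    z = target[2] - radius * math.cos(pitch) * math.cos(yaw)
    position = (x, y, z)
    # Unit look direction from camera toward the RCM target (distance == radius).
    look_dir = tuple((t - p) / radius for t, p in zip(target, position))
    return position, look_dir
```

With `scale=1.5`, for example, a 40-degree head turn would place the camera at a 60-degree arc position around the target, at the cost of a less direct correspondence between head and camera motion.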
Medical ultrasound is extensively used to characterize tissue textures and lesions, and it is the modality of choice for detection and follow-up assessment of thyroid diseases. Classical ultrasound procedures are performed manually by a trained operator with a hand-held probe. They impose a high physical and cognitive burden and yield clinical results that are highly operator-dependent, which frequently diminishes trust in the accuracy of ultrasound imaging data in repeated assessments. Robotic ultrasound, by contrast, is an emerging paradigm that integrates a robotic arm with an ultrasound probe. It achieves automated or semi-automated scanning by controlling the scanning trajectory, region of interest, and contact force, making the scans more informative and more comparable across examinations over a long time span. In this work, we present a technique that allows operators to reproduce reliably comparable ultrasound images by combining predefined trajectory execution with real-time force feedback control. The platform features a 7-axis robotic arm with 6-DoF force-torque sensing and a linear-array ultrasound probe. The forces and torques measured at the probe are used to adaptively modify the predefined trajectory during autonomously performed examinations, and the accuracy of the probe-phantom interaction force is evaluated. In parallel, by processing and combining ultrasound B-mode images with probe spatial information, structural features are extracted from the scanned volume through a 3D scan. Validation was performed on a tissue-mimicking phantom containing thyroid features, and we successfully demonstrated high image registration accuracy across multiple trials.
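Force-feedback trajectory adaptation of this kind is often realized as a simple closed-loop correction along the probe axis; the sketch below shows one proportional update step, where the gains, limits, and names are illustrative assumptions rather than the controller actually used on the platform:

```python
def adapt_depth(current_depth_mm, measured_force_n,
                target_force_n=5.0, gain_mm_per_n=0.5, max_step_mm=1.0):
    """One proportional force-control update: nudge the probe along its axis
    so the measured contact force converges toward the target force, with the
    step clamped for safety. All parameters are illustrative assumptions."""
    error = target_force_n - measured_force_n
    step = max(-max_step_mm, min(max_step_mm, gain_mm_per_n * error))
    return current_depth_mm + step

# Too little contact force -> press slightly deeper; too much -> retract.
print(adapt_depth(10.0, 3.0))  # 11.0 mm (press deeper, step clamped to 1 mm)
print(adapt_depth(10.0, 7.0))  # 9.0 mm  (retract)
print(adapt_depth(10.0, 5.0))  # 10.0 mm (at target, hold)
```

Running such an update at each waypoint lets the predefined trajectory conform to patient- or phantom-specific surface variations while keeping the contact force near a set point.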