In recent years, deep learning has developed rapidly, and the application of deep neural networks to medical image processing has become a major focus of research. This paper achieves needle position detection in medical retinal surgery by adopting a target detection algorithm based on YOLOv5 as the base deep neural network model. State-of-the-art needle detection approaches for medical surgery mainly focus on needle structure segmentation. Instead of segmenting the needle, the proposed method estimates the needle angle during the detection process. The approach also adopts a novel classification scheme, based on the different positions of the needle, to improve the model. Experiments demonstrate that the proposed network can accurately detect the needle position and measure the needle angle. The proposed method achieves an average Euclidean distance of 4.80 between the detected tip position and the actual tip position, and an average tip-angle error of 0.85 degrees across all test sets.
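The abstract's two evaluation metrics (Euclidean tip-position error and absolute tip-angle error) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and argument layout are assumptions.

```python
import math

def tip_metrics(pred_tip, true_tip, pred_angle_deg, true_angle_deg):
    """Hypothetical evaluation helper for needle-tip detection.

    Returns the Euclidean distance between predicted and ground-truth
    tip positions, and the absolute angle error in degrees (wrapped so
    the result lies in [0, 180]).
    """
    # Euclidean distance between the two 2D tip coordinates
    dist = math.hypot(pred_tip[0] - true_tip[0], pred_tip[1] - true_tip[1])
    # Wrap the angle difference into [0, 180] degrees
    diff = abs(pred_angle_deg - true_angle_deg) % 360.0
    angle_err = min(diff, 360.0 - diff)
    return dist, angle_err
```

Averaging these two quantities over a test set would yield figures comparable to the 4.80 distance and 0.85-degree error reported above.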
Analysis of tongue motion has been proven useful in gaining a better understanding of speech and swallowing disorders. Tagged magnetic resonance imaging (MRI) has been used to image tongue motion, and the harmonic phase processing (HARP) method has been used to compute 3D motion from these images. However, HARP can fail with large motions due to so-called tag (or phase) jumping, yielding highly inaccurate results. The phase vector incompressible registration algorithm (PVIRA) was developed using the HARP framework to yield smooth, incompressible, and diffeomorphic motion fields, but it can also suffer from tag jumping. In this paper, we propose a new method to avoid tag jumping occurring in the later frames of tagged MR image sequences. The new approach uses PVIRA between successive time frames and then adds their stationary velocity fields to yield a starting point from which to initialize a final PVIRA stage between troublesome frames. We demonstrate on multiple data sets that this method avoids tag jumping and produces superior motion estimates compared with existing methods.
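The key step described above, adding the stationary velocity fields from successive-frame PVIRA runs to initialize a final registration between troublesome frames, can be sketched as below. This is a simplified illustration under the assumption that each field is stored as a NumPy array on a common grid; the summation corresponds to a first-order approximation of composing the transformations.

```python
import numpy as np

def initial_velocity_field(per_frame_fields):
    """Sum per-frame stationary velocity fields to build an initial
    field for a final registration stage between distant frames.

    per_frame_fields: sequence of arrays, all with the same shape,
    e.g. (X, Y, Z, 3) for a 3D vector field (shape is an assumption).
    """
    total = np.zeros_like(per_frame_fields[0])
    for v in per_frame_fields:
        total += v  # first-order composition of successive motions
    return total
```

Using this summed field as the starting point keeps the final registration stage in a regime where tag (phase) jumping is unlikely, which is the failure mode the method is designed to avoid.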