Detection of surgical tools for an intraoperative surgical navigation system is essential for better coordination among the surgical team in the operating room. Orthopaedic surgery (OS) differs from laparoscopic surgery in that its large variety of surgical instruments and techniques makes its procedures complicated. Compared with conventional object detection in natural images, OS video images are confounded by inhomogeneous illumination, so existing methods developed for other domains are hard to apply directly. In addition, acquiring orthopaedic surgery videos is difficult because they must be recorded in a restricted surgical environment. We therefore propose a deep learning (DL) approach for surgical tool detection in OS videos that integrates knowledge from diverse representative surgery and non-surgery tool images into the model using transfer learning (TL) and data augmentation. The proposed method was evaluated for five surgical tools on knee surgery images using 10-fold cross-validation. The results show that the proposed model (mAP 62.46%) outperforms the conventional model (mAP 60%).
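The abstract does not specify the detector architecture or training configuration, so the following is only a minimal sketch of the general transfer-learning-with-augmentation pattern it describes, assuming PyTorch/torchvision; the class count, pretrained backbone, and augmentation choices are illustrative assumptions, not the authors' setup.

```python
# Sketch: fine-tuning a COCO-pretrained detector on a small set of
# surgical tool classes with illumination-oriented augmentation.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms import v2 as T

NUM_CLASSES = 6  # assumption: 5 surgical tool classes + background


def build_model():
    # Transfer learning: start from a detector pretrained on natural images
    # and replace only the box-classification head for the new classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model


def train_augmentations():
    # Data augmentation to compensate for scarce OS video frames and
    # inhomogeneous illumination (illustrative choices only).
    return T.Compose([
        T.RandomHorizontalFlip(p=0.5),
        T.ColorJitter(brightness=0.4, contrast=0.4),
        T.ToDtype(torch.float32, scale=True),
    ])
```

For evaluation in the style reported above, such a model would typically be trained and tested across 10 cross-validation folds and scored with mean average precision (mAP) over the tool classes.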