With the development of the Transformer and its derivatives over the past two years, several studies have integrated Transformers with CNNs, or used them in place of CNNs, to advance medical image segmentation. Although these models often produce acceptable feature maps for large organs, their segmentation accuracy for small organs remains unsatisfactory. The Transformer excels at modeling global context but is limited in capturing fine-grained local features, because its attention mechanism lacks the spatial inductive bias of convolution; convolution, in turn, cannot describe the long-range correlations prevalent in medical images. Since currently available Transformer-based segmentation networks are rarely optimized for this issue, we use a Transformer as the backbone and propose a medical image segmentation network with deformable attention. The model uses the attention mechanism to enhance the impact of the feature maps, addressing the low accuracy of small-organ segmentation in multi-organ segmentation tasks. On the Synapse dataset, our model achieves state-of-the-art results.
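To make the idea of deformable attention concrete, the following is a minimal NumPy sketch of the general mechanism: each query predicts a small set of sampling offsets and attention weights, then aggregates features from those offset locations. All shapes, weight matrices, and the nearest-neighbour sampling are illustrative assumptions, not the paper's actual architecture (real implementations learn the projections and use bilinear interpolation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature map: an H x W grid with C channels (assumed sizes for illustration).
H, W, C = 8, 8, 4
feat = rng.standard_normal((H, W, C))

# Hypothetical projection matrices for offsets and attention scores;
# in a real model these are learned parameters.
K = 4  # number of sampling points per query
W_off = rng.standard_normal((C, 2 * K)) * 0.1
W_attn = rng.standard_normal((C, K))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def deformable_attention(q_y, q_x):
    """One query location attends to K predicted offset points."""
    q = feat[q_y, q_x]                   # query feature, shape (C,)
    offsets = (q @ W_off).reshape(K, 2)  # predicted (dy, dx) per sampling point
    scores = softmax(q @ W_attn)         # attention weights over the K points
    out = np.zeros(C)
    for k in range(K):
        # Nearest-neighbour sampling, clamped to the grid
        # (real models use bilinear interpolation instead).
        sy = int(np.clip(round(q_y + offsets[k, 0]), 0, H - 1))
        sx = int(np.clip(round(q_x + offsets[k, 1]), 0, W - 1))
        out += scores[k] * feat[sy, sx]
    return out

out = deformable_attention(3, 3)
print(out.shape)
```

Because the sampling locations follow the predicted offsets rather than a fixed grid, the attention can concentrate on a small organ's neighbourhood, which is the property the abstract appeals to for improving small-organ accuracy.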