Handwritten text recognition (HTR) is a challenging task that requires a large amount of diverse training data. One possible approach to this problem is the adoption of convolutional neural networks (CNNs). The key challenge is that a CNN requires geometrically labeled training data, which increases the cost and time of labeling. To overcome these limitations we propose a method, based on a Generative Adversarial Network (GAN), that transfers handwriting styles onto printed-style images while preserving the Same geometrical Annotation as the Input (SAIGAN). Taking a printed-style image as input, it produces a handwritten image with the same text content located in the same positions. Our method operates at the character level and can produce sequences of arbitrary length and any content. Once trained, it can also generate new handwriting styles simply by manipulating latent vectors. The proposed character-level style supervision allows our model to surpass the baseline method.
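The abstract describes a geometry-preserving conditional generator: a printed-style image plus a handwriting-style latent vector go in, and a handwritten image with identical glyph positions comes out, so existing geometric labels transfer for free. The toy sketch below illustrates only that input/output contract, not the paper's actual network; `generator`, the latent size, and the random-projection "style conditioning" are all hypothetical stand-ins.

```python
import numpy as np

def generator(printed_image, style_latent, rng):
    """Toy stand-in for a SAIGAN-style generator: maps a printed-style
    character image plus a handwriting-style latent vector to a stylized
    image of the SAME shape, so per-pixel geometric annotations carry over.
    (Hypothetical sketch; the real model is a trained GAN, not this.)"""
    h, w = printed_image.shape
    # Hypothetical style conditioning: a random projection of the latent
    # modulates stroke appearance while leaving glyph positions fixed.
    proj = rng.standard_normal((style_latent.size, h * w))
    style_map = (style_latent @ proj).reshape(h, w)
    return np.clip(printed_image + 0.1 * np.tanh(style_map), 0.0, 1.0)

rng = np.random.default_rng(0)
printed = rng.random((32, 32))        # stand-in for a printed-style glyph image
z_style = rng.standard_normal(16)     # handwriting-style latent vector
out = generator(printed, z_style, rng)
assert out.shape == printed.shape     # geometry preserved: same layout as input
```

Sampling a new `z_style` corresponds to the "manipulating latent vectors" step: the same text content and positions are kept while the rendered style changes.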