Hair is difficult to model and simulate because of the sheer number of strands, their varied shapes, and the material properties of hair itself. Traditional physics- and geometry-based hair construction requires complex calculations and many parameters. In recent years, hair modeling methods based on single images, multiple images, and video have been developed; their main advantage is fast modeling. At present, hair geometry is mainly represented by polylines of three-dimensional points. In this paper, a three-dimensional multi-strip representation is used for the hair geometry. Deep learning is applied to a single image to estimate the position and type of the hair, and a similar hairstyle model is retrieved from a database. The selected hair model and head model are then connected and fixed by further fitting. Finally, dynamic hair is simulated by adding gravity, friction, collision detection, and other effects. The resulting model preserves the appearance of the input image as much as possible and can be used to simulate common hair geometry.
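The abstract does not specify the solver used for the dynamic simulation step. As an illustrative sketch only, the gravity/friction/collision stage could resemble a mass-spring strand model stepped with semi-implicit Euler; every constant and function name below is a hypothetical choice, not taken from the paper.

```python
# Hypothetical sketch of one dynamic-hair step: a strand is a chain of
# point masses joined by stiff springs, advanced with semi-implicit Euler
# under gravity, with velocity damping standing in for friction and a
# ground-plane test standing in for collision detection.

GRAVITY = -9.8      # m/s^2 along y
DAMPING = 0.98      # crude stand-in for friction / air drag
STIFFNESS = 500.0   # spring constant between neighboring points
REST_LEN = 0.05     # rest length of each strand segment
DT = 0.002          # time step
GROUND_Y = 0.0      # y of the ground plane used for collision detection

def step_strand(pos, vel):
    """Advance one strand (lists of [x, y] points and velocities) by DT.
    pos[0] is the root, pinned to the scalp and never moved."""
    n = len(pos)
    forces = [[0.0, GRAVITY] for _ in range(n)]
    # Spring forces keep each segment length near REST_LEN.
    for i in range(n - 1):
        dx = pos[i + 1][0] - pos[i][0]
        dy = pos[i + 1][1] - pos[i][1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = STIFFNESS * (dist - REST_LEN)
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx
        forces[i][1] += fy
        forces[i + 1][0] -= fx
        forces[i + 1][1] -= fy
    # Semi-implicit Euler; the root (index 0) stays fixed to the head model.
    for i in range(1, n):
        vel[i][0] = (vel[i][0] + forces[i][0] * DT) * DAMPING
        vel[i][1] = (vel[i][1] + forces[i][1] * DT) * DAMPING
        pos[i][0] += vel[i][0] * DT
        pos[i][1] += vel[i][1] * DT
        # Collision detection against the ground plane.
        if pos[i][1] < GROUND_Y:
            pos[i][1] = GROUND_Y
            vel[i][1] = 0.0
    return pos, vel
```

For example, a strand initialized horizontally at the root height will droop under gravity over repeated calls to `step_strand` while its root stays attached to the head.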