As populations age, the number of people with disabilities and physical impairments continues to grow. Dining is one of the most important daily tasks they must manage. Feeding robot systems have been introduced into assistive-care settings to reduce the burden on nursing staff, and multiple types of feeding robots have been developed. However, most existing feeding robot systems still suffer from insufficient intelligence and convenience and pay limited attention to user intention. To address this issue, we propose a vision-based algorithm for interaction between the robot and users. The method effectively identifies the user's dining intention, menu-selection intention, and chewing dynamics during meals. It enables the robot to operate more intelligently, guided by the user's intention, without additional wearable devices, significantly enhancing user comfort and convenience. We conducted a series of experiments on dining intention, menu-selection intention, and chewing dynamics during meals. The experimental results show an average recognition rate of 98% for users' dining intention and 86.53% for chewing dynamics. This contribution presents an interactive approach for individuals with limited mobility, enhancing the intelligence of the feeding robot, and holds promise for future applications in nursing scenarios.
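The abstract does not describe the recognition pipeline itself, but one common vision-based cue for chewing dynamics is the mouth aspect ratio (MAR) computed from facial landmarks. The sketch below is a hypothetical illustration, not the paper's method: it assumes an upstream landmark detector (e.g., dlib or MediaPipe) already supplies a per-frame MAR signal, and classifies a window as "chewing" by counting open/close mouth cycles with hysteresis thresholds (all threshold values here are illustrative assumptions).

```python
def count_chew_cycles(mar_series, open_thresh=0.35, close_thresh=0.25):
    """Count open-then-close mouth cycles in a MAR time series.

    Two thresholds (hysteresis) avoid double-counting jitter that
    hovers near a single threshold. Returns completed cycle count.
    """
    cycles = 0
    mouth_open = False
    for mar in mar_series:
        if not mouth_open and mar > open_thresh:
            mouth_open = True           # mouth just opened
        elif mouth_open and mar < close_thresh:
            mouth_open = False          # mouth closed again: one cycle
            cycles += 1
    return cycles


def is_chewing(mar_series, fps=30, min_rate_hz=0.5):
    """Classify a frame window as chewing if the cycle rate is high enough."""
    duration_s = len(mar_series) / fps
    return count_chew_cycles(mar_series) / duration_s >= min_rate_hz


# Illustrative usage with synthetic MAR signals (30 frames = 1 s at 30 fps):
chewing = [0.2, 0.4, 0.2] * 10    # rhythmic open/close pattern
resting = [0.2] * 30              # mouth stays closed
print(count_chew_cycles(chewing))  # 10 cycles
print(is_chewing(chewing))         # True
print(is_chewing(resting))         # False
```

A real system would derive the MAR from landmark coordinates around the lips and smooth the signal before thresholding; this sketch only shows the temporal classification step.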