Due to the substantial increase in semi-disabled and disabled individuals in our society, coupled with the scarcity of caregivers, providing meals for these patients is a crucial concern. Robots specifically designed for meal assistance have been developed and deployed to address this need. Although many meal-assistance robots have reached the market, most implement the meal-assistance function in only a rudimentary way and do not consider the psychological feelings of the patient, and they suffer from simplistic feeding trajectories and low anthropomorphism. To address these problems, we propose a novel robotic meal-assistance trajectory planning method that uses visual sensors, rather than manual teaching, to determine the patient's mouth position, which significantly improves the success rate of robotic meal assistance. The method also adds a transition segment, thereby increasing patient comfort and reducing the cost of meal assistance. Specifically, the method applies quintic polynomial interpolation to the ready-to-fetch and reset segments, and linear and circular interpolation to the remaining segments, yielding an efficient and highly anthropomorphic meal-assistance trajectory. To confirm the viability of the suggested approach, simulations and real-world experiments were conducted. The results show that the meal-assistance trajectory planned by our proposed method is not only smooth, stable, and highly anthropomorphic with a large meal intake, but also accounts for the variability and psychological comfort of individual patients, and offers high versatility and usability.
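The quintic polynomial interpolation mentioned above can be sketched briefly. A quintic q(t) has six coefficients, enough to match position, velocity, and acceleration at both endpoints of a segment, which is what makes the resulting motion smooth. The function names and the zero-boundary-derivative defaults below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, acc0=0.0, accf=0.0):
    """Coefficients c0..c5 of q(t) = c0 + c1*t + ... + c5*t^5 that
    match position, velocity, and acceleration at t=0 and t=T.
    Zero boundary velocity/acceleration (the defaults) gives a
    rest-to-rest segment such as ready-to-fetch or reset."""
    A = np.array([
        [1, 0, 0,      0,       0,        0],        # q(0)
        [0, 1, 0,      0,       0,        0],        # q'(0)
        [0, 0, 2,      0,       0,        0],        # q''(0)
        [1, T, T**2,   T**3,    T**4,     T**5],     # q(T)
        [0, 1, 2*T,    3*T**2,  4*T**3,   5*T**4],   # q'(T)
        [0, 0, 2,      6*T,     12*T**2,  20*T**3],  # q''(T)
    ], dtype=float)
    b = np.array([q0, v0, acc0, qf, vf, accf], dtype=float)
    return np.linalg.solve(A, b)

def evaluate(coeffs, t):
    """Evaluate the polynomial with the given coefficients at time t."""
    return sum(c * t**i for i, c in enumerate(coeffs))
```

In practice one such polynomial would be solved per joint (or per Cartesian coordinate) of the segment, with T chosen to respect the robot's velocity and acceleration limits.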
With population aging, the number of disabled people has increased, adding to the existing population with physical impairments. Dining is one of the most important problems they must solve. Feeding robot systems have been introduced into assistive-nursing settings to reduce the burden on nursing staff, and multiple types of feeding robots have been developed. However, most existing feeding robot systems still suffer from insufficient intelligence and convenience, with limited attention to user intention. To address this issue, we propose a vision-based algorithm for interaction between the robot and its users. This method effectively identifies user intentions for dining and menu selection, as well as chewing dynamics during meals. It enables the robot to operate more intelligently according to the user's intention without additional wearable devices, significantly enhancing user comfort and convenience. We conducted a series of experiments on dining intentions, menu-selection intentions, and chewing dynamics during meals. The experimental results show that the average recognition rate of users' dining intentions is 98%, and the average recognition rate of chewing dynamics is 86.53%. This contribution presents an interactive approach for individuals who lack mobility, enhancing the intelligence of the feeding robot, and it holds promise for future applications in nursing scenarios.
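The abstract does not specify how the vision-based recognition works, but one common building block for detecting an open mouth (a plausible proxy for dining intention or chewing state) is the mouth aspect ratio computed from facial landmarks. The landmark layout, function names, and threshold below are purely illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def mouth_aspect_ratio(landmarks):
    """Ratio of vertical mouth opening to horizontal mouth width.

    `landmarks` is an (8, 2) array of mouth keypoints ordered as:
    left corner, right corner, three upper-lip points, three
    lower-lip points. This is a simplified layout chosen for the
    sketch; real detectors (e.g. dlib's 68-point model) index
    mouth landmarks differently.
    """
    left, right = landmarks[0], landmarks[1]
    upper, lower = landmarks[2:5], landmarks[5:8]
    vertical = np.mean(np.linalg.norm(upper - lower, axis=1))
    horizontal = np.linalg.norm(right - left)
    return vertical / horizontal

def is_mouth_open(landmarks, threshold=0.5):
    """Threshold is an assumed value; it would be tuned per user."""
    return mouth_aspect_ratio(landmarks) > threshold
```

A full pipeline would feed camera frames through a face-landmark detector, track this ratio over time, and classify sustained openings as dining intention versus periodic oscillations as chewing.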