From our cell phones to game controllers, haptics – the use of skin stimulation, often in the form of vibrations, to relay information – has become increasingly pervasive over the last few decades. Most applications leverage low-cost solutions, such as linear resonant and eccentric rotating mass actuators, which are limited in either bandwidth or response time. However, time-sensitive applications that require complex representations of the environment, such as obstacle avoidance and emergency response, impose restrictive requirements on haptic technologies. To this end, our team has recently developed a new type of high-performance haptic actuator, based on composites of piezoelectric materials called macro fiber composites (MFCs). The MFC is bonded to an aluminum backing, enclosed by a custom-made case, and put in contact with a hollow cylinder filled with a dense material. The mass in the cylinder allows tuning of the frequency response of the actuator, toward increasing the amplitude of the response in the frequency range in which skin is most sensitive (10-250 Hz). In this paper, we put forward a detailed characterization of these new haptic actuators. First, we experimentally detail their mechanical and piezoelectric response. We then assess their frequency response while varying the mass in the cylinder. Finally, we study how the actuators would interact mechanically with the skin. To this end, we conduct experiments with an actuator in contact with a pre-stretched membrane, whose mechanical properties fall within the range of variability of human skin. We measure the frequency response of the actuator while varying the pre-stretch level of the membrane, simulating different skin indentations. Our results demonstrate that this new type of actuator can maintain an amplitude above the skin discrimination threshold over a large bandwidth, while offering low latency owing to fast piezoelectric response times.
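As a rough, first-order illustration of the mass-tuning principle described above, the actuator can be idealized as a single-degree-of-freedom oscillator. This simplification and the symbols below are our own notation for exposition, not the model or quantities reported in the paper.

```latex
% Single-degree-of-freedom sketch of mass tuning (our assumption, not the
% authors' stated model): adding mass to the hollow cylinder lowers the
% natural frequency of the actuator.
\[
  f_n = \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{eff}}}{m_0 + m_{\mathrm{add}}}}
\]
% k_eff : effective stiffness of the MFC/aluminum assembly (assumed)
% m_0   : intrinsic moving mass of the tactor (assumed)
% m_add : tunable mass placed in the hollow cylinder
```

Under this idealization, increasing the mass in the cylinder shifts the resonance down into the 10-250 Hz band where skin sensitivity peaks, consistent with the tuning strategy described above.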
The prevalence of blindness and low vision is skyrocketing as the population ages. Independent, efficient, and safe navigation for persons with blindness and low vision requires hard work, practice, and the development of strong orientation and mobility skills. In this vein, orientation and mobility training provides tools to familiarize oneself with new environments and maintain an independent lifestyle. In recent years, orientation and mobility training has adopted electronic travel aids, smart devices developed to assist those with blindness and low vision during navigation. However, learning how to use an electronic travel aid in orientation and mobility training sessions may prove dangerous for the end user. Early in use, the end user may misinterpret the information provided by the electronic travel aid; in fact, the learning curve may be shallow during initial adoption. To this end, we built a multiplayer virtual reality platform to simulate an orientation and mobility training session, involving a trainer and a trainee, for practicing with an electronic travel aid in a controlled, safe, yet realistic environment. We interfaced the virtual reality platform with a custom electronic travel aid created by our team. The electronic travel aid consists of a specially designed camera on a backpack and a haptic belt, along with software that relays information about the location of nearby obstacles in the virtual environment through spatiotopic vibrotactile stimulation of the abdomen. In the virtual environment, the trainer can instruct the trainee in the use of the electronic travel aid while navigating complex urban environments. The efficacy of the communication between trainer and trainee in teaching the correct use of the electronic travel aid, and its performance in assisting navigation, will be evaluated through a series of systematic experiments.
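A minimal sketch of how such a spatiotopic mapping from virtual obstacles to belt vibrations could be implemented is given below. The belt layout (five columns by two rows) follows the companion abstracts, but the function names, field-of-view, range limit, and mapping rules are illustrative assumptions, not the team's actual software.

```python
# Illustrative spatiotopic mapping from an egocentric obstacle position
# (as reported by the VR environment) to a tactor on the haptic belt.
N_COLS, N_ROWS = 5, 2    # belt layout from the companion abstracts
FOV_DEG = 120.0          # assumed horizontal capture field
MAX_RANGE_M = 5.0        # assumed maximum obstacle distance relayed

def obstacle_to_tactor(azimuth_deg, elevation_m, distance_m):
    """Map an egocentric obstacle position to (row, col, intensity).

    azimuth_deg : obstacle bearing, 0 = straight ahead, positive to the right
    elevation_m : obstacle height relative to the waist (assumed convention)
    distance_m  : range to the obstacle
    Returns None if the obstacle falls outside the capture field.
    """
    if abs(azimuth_deg) > FOV_DEG / 2 or distance_m > MAX_RANGE_M:
        return None
    # Spatiotopic column: left obstacles -> left tactors, preserving order.
    frac = (azimuth_deg + FOV_DEG / 2) / FOV_DEG          # 0..1, left to right
    col = min(int(frac * N_COLS), N_COLS - 1)
    # Row: obstacles above the waist map to the upper row (assumption).
    row = 0 if elevation_m >= 0 else 1
    # Intensity grows as the obstacle gets closer (simple linear law).
    intensity = 1.0 - distance_m / MAX_RANGE_M
    return row, col, intensity

# Example: an obstacle 30 degrees to the right, knee height, 2 m away.
print(obstacle_to_tactor(30.0, -0.5, 2.0))   # -> (1, 3, 0.6)
```

A mapping of this kind preserves the left-right arrangement of obstacles on the abdomen, which is what "spatiotopic" stimulation requires.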
The number of people experiencing vision loss and visual impairment is continuously increasing, concurrent with the aging of the population. Wearable electronic travel aids (ETAs) can be used to realize assistive, intelligent navigation systems that overcome some of the problems that this disability brings about. We propose a virtual reality (VR) environment that can simulate orientation and mobility training with an ETA developed by our team, toward enhancing mobility skills in visually impaired people. VR simulations can serve as surrogates for physical environments that might be too dangerous to visit in person during the initial sessions of the training.
Visual impairment represents a critical challenge for our society, with 285 million people affected worldwide; alarmingly, the prevalence is expected to triple by 2050. Supporting mobility is a chief priority for assistive technologies. In recent years, the integration of computer vision and haptic technologies has led to a number of wearable electronic travel aids (ETAs). Previously, we proposed an ETA comprising a computer vision system and a wearable haptic device in the form of a belt. The belt encompasses a two-by-five array of piezoelectric macro-fiber composite (MFC) actuators, which generate vibrations on the abdomen when an oscillating voltage is applied across their electrodes. The computer vision system identifies the position and distance of surrounding obstacles egocentrically and drives the actuators relative to the salience of the potential hazard(s). Despite promising pilot studies, the design, control, and optimization of the ETA require substantial, potentially high-risk, and tedious training and testing to accommodate patient-specific behavioral idiosyncrasies and a variety of visual impairments. To address these issues, we employ a virtual reality (VR) platform that offers simulations of visual impairment by disease type and severity with front-end control. We review our early work on the first three visual impairments piloted in the platform, each with three levels of severity: mild, moderate, and severe. The VR environment is interfaced with the ETA, which provides feedback to the user based on the position of virtual obstacles. These simulations allow safe, controlled, repeatable experiments with ETAs that can be performed with varying degrees of visual perception. Our framework can become a paradigm for the development and testing of ETAs, with other potential applications in disability awareness, education, and training.
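How "salience" translates into a drive signal is not specified in the abstract; below is one plausible scheme, under the assumptions that salience combines proximity with closing speed and that MFC vibration amplitude scales with applied voltage up to a safe maximum. All names and values are hypothetical.

```python
# Hypothetical salience-to-drive law for one MFC tactor; the voltage limits
# and the proximity/urgency weighting are our assumptions, not the authors'
# control scheme.
V_MAX = 200.0   # assumed peak drive voltage safe for the MFC, in volts
V_MIN = 40.0    # assumed minimum voltage producing a perceivable vibration

def drive_voltage(distance_m, approach_speed_mps=0.0, d_ref=5.0):
    """Return the drive amplitude for a hazard at `distance_m`.

    Salience blends proximity with closing speed: nearer and faster-
    approaching obstacles produce stronger vibrations (illustrative model).
    """
    proximity = max(0.0, 1.0 - distance_m / d_ref)            # 1 near, 0 far
    urgency = min(1.0, max(0.0, approach_speed_mps) / 2.0)    # saturates at 2 m/s
    salience = min(1.0, 0.7 * proximity + 0.3 * urgency)
    if salience == 0.0:
        return 0.0                                            # tactor off
    return V_MIN + (V_MAX - V_MIN) * salience

# A wall 1 m ahead while walking toward it at 1.2 m/s:
print(round(drive_voltage(1.0, 1.2), 1))   # -> 158.4
```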
Visual impairment constitutes a compelling issue for our society. The aging of our population will cause a rise in the number of individuals affected by these debilitating conditions, which often challenge people in their daily routines and carry significant healthcare costs and consequences for society at large. Technological progress offers unique means to improve life conditions for the visually impaired, who still rely on low-tech systems, such as canes and service dogs. Here, we present a novel design for a piezoelectric belt integrated into a backpack, which can provide vibrotactile stimulation to the abdomen to signal the presence of obstacles. The belt is composed of an array of ten macro-fiber composite (MFC) actuators, arranged in a matrix of five columns by two rows. Obstacle identification and localization are afforded by a computer vision system, developed by our collaborators. The output of the array of actuators is controlled by the computer vision system, such that, if an obstacle is identified in a certain “capture field”, the corresponding actuator is activated in an egocentric, spatiotopically preserved fashion. Each actuator comprises an encapsulated, aluminum-backed MFC, driven by a tunable astable multivibrator. The resonance frequency of the actuators is tailored by adding a variable mass to a hollow cylinder fashioned as a protrusion secured to the tactor, transmitting vibrations from the MFC to the skin of the human-in-the-loop. This new design allows us to enhance the peak-to-peak displacement of the actuators by more than a factor of ten over the 10-200 Hz frequency range, thereby surpassing, with a robust margin, the 50 μm threshold necessary for reliable discrimination by abdominal somatosensation.
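The abstract does not specify the multivibrator topology. Assuming the common 555-timer astable configuration as one possible realization, the output frequency follows the standard relation f = 1.44 / ((R1 + 2 R2) C), so adjusting R2 (e.g., with a potentiometer) sweeps the drive across the 10-200 Hz band. The component values below are illustrative, not the actual circuit.

```python
# Worked example under an assumed 555-timer astable stage (the abstract
# only says "tunable astable multivibrator").
def astable_frequency(r1_ohm, r2_ohm, c_farad):
    """Standard 555 astable output frequency, f = 1.44 / ((R1 + 2*R2)*C)."""
    return 1.44 / ((r1_ohm + 2.0 * r2_ohm) * c_farad)

C = 1e-6       # 1 uF timing capacitor (assumed)
R1 = 4.7e3     # 4.7 kOhm (assumed)
for r2 in (1e3, 10e3, 68e3):   # sweeping a potentiometer
    print(f"R2 = {r2/1e3:5.1f} kOhm -> {astable_frequency(R1, r2, C):6.1f} Hz")
# R2 =   1.0 kOhm ->  214.9 Hz
# R2 =  10.0 kOhm ->   58.3 Hz
# R2 =  68.0 kOhm ->   10.2 Hz
```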