Tongue clicks could soon help paralysed people steer their wheelchairs
By ANI
Thursday, December 2, 2010
LONDON - Paralysed and severely disabled people, including quadriplegics, may soon be able to operate wheelchairs using their tongues - thanks to an innovation led by an Indian-origin scientist.
The key is an in-ear device that listens out for clicks and tuts and translates the sounds into commands for a wheelchair, reports New Scientist.
Mouth-operated interfaces for controlling wheelchairs are already quite common among severely disabled people.
“But everything that involves tongue movement requires putting something in the mouth,” said Ravi Vaidyanathan, who heads the project at the University of Bristol, UK.
Besides raising hygiene issues, such devices make it difficult for the user to eat or speak while wearing them. Monitoring tongue movements through the ear avoids these problems.
The new device, a simple in-ear microphone, picks up the low-frequency sounds made by four kinds of tongue click, chosen because each has a distinct acoustic signature that the system won’t easily confuse with other sounds.
The microphone sends the signal to a processor that categorises the clicks and relays the result to the wheelchair, where each click type steers the chair in a different direction.
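The article gives no implementation details, but the pipeline it describes - classify a detected click, then map the click type to a motion command - can be sketched roughly as below. The click labels, acoustic “signatures” and command names are illustrative assumptions for the sake of the sketch, not the Bristol group’s actual software.

```python
# Illustrative sketch only: the click types, feature values and command
# mapping below are made-up assumptions, not the system described in the
# article, which does not publish its software.

from enum import Enum

class Click(Enum):
    FRONT = "front"            # hypothetical click type
    BACK = "back"
    LEFT_SIDE = "left_side"
    RIGHT_SIDE = "right_side"

# Each recognised click type maps to one wheelchair motion command.
COMMANDS = {
    Click.FRONT: "forward",
    Click.BACK: "reverse",
    Click.LEFT_SIDE: "turn_left",
    Click.RIGHT_SIDE: "turn_right",
}

def classify_click(features: dict) -> Click:
    """Toy classifier: picks the click type whose stored acoustic
    'signature' is closest to the measured features. A real system
    would run a trained classifier on the in-ear microphone signal."""
    signatures = {
        Click.FRONT:      {"peak_hz": 120, "duration_ms": 40},
        Click.BACK:       {"peak_hz": 90,  "duration_ms": 60},
        Click.LEFT_SIDE:  {"peak_hz": 150, "duration_ms": 35},
        Click.RIGHT_SIDE: {"peak_hz": 180, "duration_ms": 30},
    }
    def distance(sig):
        return (abs(sig["peak_hz"] - features["peak_hz"])
                + abs(sig["duration_ms"] - features["duration_ms"]))
    return min(signatures, key=lambda c: distance(signatures[c]))

def drive(features: dict) -> str:
    """Turn one detected click into a wheelchair command."""
    return COMMANDS[classify_click(features)]

# Example: a sound with these (made-up) features steers the chair left.
print(drive({"peak_hz": 148, "duration_ms": 36}))  # -> "turn_left"
```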
Vaidyanathan’s group have so far used the interface to navigate a virtual wheelchair through a maze.
They have also used it to control the reaching and grasping movements of a robotic arm.
The four tongue clicks can be mastered in a couple of hours, said the researchers.
“This is a promising technique,” said Jose del R. Millan, who works on brain interfaces for controlling wheelchairs at the Swiss Federal Institute of Technology in Lausanne.
If it can be combined with real wheelchairs and shown to work reliably, it could be very valuable for patients, he said.
The group do plan to start using real wheelchairs, said Vaidyanathan.
In the meantime, the researchers are looking at whether the number of commands can be increased by using a microphone in each ear, to get clearer signals.
Even if four clicks prove to be the limit, each sound takes only 0.2 seconds to detect, so it may be possible to generate higher-level commands through combinations of clicking sounds, said Bristol team member Michael Mace.
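As a rough illustration of that combination idea, using only the figures quoted above (four click types, 0.2 seconds per detection), the hypothetical sketch below counts how many commands short click sequences could encode and how long each would take to enter; how the team would actually map sequences to commands is not described in the article.

```python
# Back-of-envelope sketch of the combination idea. The only inputs taken
# from the article are the four click types and the 0.2 s detection time;
# everything else is illustrative.

CLICKS = ["A", "B", "C", "D"]   # the four distinguishable click types
DETECT_TIME_S = 0.2             # detection time per click (article figure)

for length in (1, 2, 3):
    combos = len(CLICKS) ** length          # distinct sequences of that length
    print(f"{length}-click sequences: {combos} commands, "
          f"~{length * DETECT_TIME_S:.1f} s each")

# Output:
# 1-click sequences: 4 commands, ~0.2 s each
# 2-click sequences: 16 commands, ~0.4 s each
# 3-click sequences: 64 commands, ~0.6 s each
```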
The work was presented in October at the International Conference on Intelligent Robots and Systems in Taipei, Taiwan. (ANI)