Wednesday, 27 November 2013
A Dual-Mode Human Computer Interface Combining Speech and Tongue Motion for People with Severe Disabilities
Individuals with severe disabilities, such as those paralyzed as a result of spinal cord injuries (SCI) at level C4 or above, stroke, amyotrophic lateral sclerosis (ALS), or traumatic brain injuries (TBI), rely heavily on assistive technologies (AT) to carry out everyday tasks. Among ATs, those providing alternative control for computer access and wheeled mobility are considered the most important for today's active lifestyle, since they improve users' quality of life (QoL) by easing two major limitations: effective communication and independent mobility. Computers and the internet are regarded as great equalizers that allow all individuals similar vocational and recreational opportunities. It is generally accepted that once an individual with a disability is "enabled" to move around and effectively access computers or smartphones, he or she can do most of the things that able-bodied individuals with educational, administrative, or scholarly careers do on a daily basis. This has motivated a considerable amount of ongoing research toward developing new ATs that take advantage of these individuals' remaining abilities.

We present a new wireless and wearable human-computer interface called the dual-mode Tongue Drive System (dTDS), designed to allow people with severe disabilities to use computers more effectively, with increased speed, flexibility, usability, and independence, through their tongue motion and speech. The dTDS detects the user's tongue motion using a magnetic tracer and an array of magnetic sensors embedded in a compact, ergonomic wireless headset. It also captures the user's voice wirelessly using a small microphone embedded in the same headset.
Preliminary evaluation results from 14 able-bodied subjects and 3 individuals with high-level spinal cord injuries (C3-C5) indicated that the dTDS headset, combined with commercially available speech recognition (SR) software, can provide end users with significantly higher performance than either unimodal interface based on tongue motion or speech alone, particularly in tasks that require both pointing and text entry.