More mobility for everyone

Smartphones have revolutionized life for most people, but many of their apps exclude those with limited or no vision. Researchers are working on ways to make mobile devices an ‘all-access’ pass for the sight-impaired.

Chieko Asakawa, a researcher at IBM, navigates CMU's campus using NavCog.

Here is a uniquely 21st-century problem that blind people face: They hear someone say, “Hello,” and reply with a “hello.” Then the first speaker trails off into what seems like a non sequitur, and the blind pedestrian awkwardly realizes he or she was answering a cell phone.

“It’s a very real scenario,” says Chieko Asakawa, a researcher at IBM, who has been blind since age 14. “Often a person says ‘hi’ and I say ‘hi’ back, and then a colleague tells me he was talking on the phone. Socialization is a very big challenge.”

But Asakawa, a veteran IBM researcher whose past projects have included a Braille word processor and a talking Web browser, has a solution—one that relies on the same technology that begat the dilemma.

Asakawa, an IBM research fellow, is currently the IBM Distinguished Service Professor in CMU’s Robotics Institute. She and her collaborators, Kris Kitani of the Robotics Institute and Jeffrey Bigham of the Human-Computer Interaction Institute, imagine a time when a blind person will be able to plug into a smartphone app that connects to sensor signals and guides them, literally step by step, through a public space. The same Siri-like voice that will someday be able to tell them to “walk three steps and open the door on the left” might also identify passersby via facial recognition software. It will even say whether the acquaintance seems happy or sad, and whether or not they’re holding a phone to their ear.

Called NavCog, it’s one of several projects under development by CMU and IBM researchers that seek to use personal technology to help people with sight impairments. From apps to helper robots, it’s the kind of technology that has the potential to remold almost every aspect of the lives of the blind.

“Blind people hope to gain much from well-designed and robustly implemented technology,” says M. Bernardine Dias, an associate research professor at the Robotics Institute. “In many situations, technology can play a critical role in enhancing independence, safety, and access to new opportunities for blind people and people with visual impairments and other disabilities.”

Asakawa says that her current work represents a leap into the “real world.” “My [past] research focus has been about information on the net,” she says. “Now, I am thinking, ‘How can I get to the classroom, the post office, the museum?’”

The NavCog project on which she is working has spread 250 Bluetooth signal emitters called beacons—white squares a bit smaller than the average smoke detector—through three School of Computer Science buildings: Newell-Simon Hall, the Gates Center for Computer Science and Wean Hall. The system currently offers directions around campus: you can plug in, choose start and end points, and a voice guides you. “It’s like a navigation system that makes both outdoor and indoor navigation seamless, for humans,” Asakawa says. Her team hopes that someday, major foot-traffic areas—like airports, bus stations, malls and concert venues—will be dotted with beacons to make them more accessible to people with limited or no vision. The researchers hope to add other facets, such as facial recognition, to an overall smartphone package for the sight-impaired, making the pedestrian experience easier.
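The article doesn’t detail NavCog’s internals, but the basic shape of beacon-guided navigation can be sketched briefly. In the Python sketch below, the nearest-beacon location estimate and all the names (Beacon, guide, the sample route) are illustrative assumptions, not NavCog’s actual implementation; real systems typically fuse many signals to localize the user more precisely.

# A minimal sketch of beacon-guided navigation, assuming a known floor plan
# of beacon positions and a precomputed route. The nearest-beacon location
# estimate and all names here are illustrative, not NavCog's actual design.
import math
from dataclasses import dataclass

@dataclass
class Beacon:
    beacon_id: str
    x: float  # position on the floor plan, in meters
    y: float

def nearest_beacon(readings: dict[str, float], beacons: dict[str, Beacon]) -> Beacon:
    """Estimate the user's location as the beacon with the strongest signal.

    RSSI values are negative dBm; less negative generally means closer.
    """
    return beacons[max(readings, key=readings.get)]

def guide(route: list[str], readings: dict[str, float],
          beacons: dict[str, Beacon]) -> str:
    """Return the next voice prompt along a route, given current readings."""
    here = nearest_beacon(readings, beacons)
    if here.beacon_id == route[-1]:
        return "You have arrived."
    nxt = beacons[route[route.index(here.beacon_id) + 1]]
    meters = math.hypot(nxt.x - here.x, nxt.y - here.y)
    return f"Walk about {meters:.0f} meters toward waypoint {nxt.beacon_id}."

# Three beacons along a hallway; the user is currently nearest beacon "A".
beacons = {b.beacon_id: b for b in (Beacon("A", 0, 0), Beacon("B", 10, 0), Beacon("C", 10, 8))}
print(guide(["A", "B", "C"], {"A": -52.0, "B": -71.0, "C": -88.0}, beacons))
# -> "Walk about 10 meters toward waypoint B."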

Kris Kitani, a collaborator on NavCog, has developed another smartphone app called EdgeSonic, which he describes as “audio Braille for images.” EdgeSonic can take an image captured by a smartphone and turn it into a crude representation of the most pronounced lines in the photo. As the user traces her finger across the image on the phone’s screen, it emits differing clicks and bleeps depending on whether she is moving along the edges of an object or outside of it. It’s essentially an audio language that reads shapes. A blind person could photograph a table or shelf and get an audio map of the objects on it. EdgeSonic also has less utilitarian uses, Kitani says. A user could snap a photo of a Christmas card and feel along the outline of a tree, “hearing” the perimeter of the object.
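The published EdgeSonic pipeline isn’t spelled out here, but the core idea can be sketched: extract an edge map from the photo, then map each touch position to a sound cue. In the Python sketch below, the Sobel filter and the simple click-or-silence scheme are illustrative assumptions, not the app’s actual design.

# A minimal sketch of the EdgeSonic idea as described above: detect strong
# edges in an image, then map a finger position to a distinct sound cue.
# The Sobel filter and two-cue scheme are assumptions, not the real app.
import numpy as np

def sobel_edges(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Return a boolean edge map from a grayscale image with values in [0, 1]."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    # Correlate with the two Sobel kernels by summing shifted copies.
    gx = sum(kx[i, j] * pad[i:i + gray.shape[0], j:j + gray.shape[1]]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + gray.shape[0], j:j + gray.shape[1]]
             for i in range(3) for j in range(3))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()

def sound_for_touch(edges: np.ndarray, row: int, col: int) -> str:
    """Map a touch location to a sound cue: a click on an edge, silence off it."""
    return "click" if edges[row, col] else "silence"

# Toy image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = sobel_edges(img)
print(sound_for_touch(edges, 16, 30))  # on the square's top edge -> "click"
print(sound_for_touch(edges, 32, 32))  # interior of the square   -> "silence"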

Dias and her research group, TechBridgeWorld, are exploring ways to bring the power of smartphones and other mobile devices to populations underserved by technology—including the blind and visually impaired. Many live in developing countries, which are home to almost 90 percent of the world’s visually impaired people, according to the World Health Organization. (The disparity is due to lack of access to medical care, particularly preventative care, according to the WHO.)

TechBridgeWorld’s first project was developing computerized Braille tutors for the Mathru School for the Blind in India. The first version of the tutor connected with a computer, while the second was battery-powered and had its own on-board computing. Easy to use and transportable, the tutors have now been provided to organizations serving the visually impaired in six other countries.

More recent TechBridgeWorld projects include NavPal (not to be confused with NavCog), a smartphone app that provides navigational assistance to visually impaired adults as they move around unfamiliar indoor and outdoor environments, as well as Assistive Robots for Blind Travelers, which explores how robots may be able to help users with limited vision to safely move around a busy urban environment. The latter project is deploying a Baxter research robot named “Rathu” (which means “red” in Sinhalese) that’s been specially programmed to assist blind travelers. With its long, multi-jointed arms and complex sensors, Rathu can differentiate between objects such as bus passes and credit cards and hand them to the blind person. Dias foresees a time when robots such as Rathu will be able to trace a map of a room along a person’s hand. She and her team are currently working to enable Rathu to recognize users and incorporate past experiences into how it interacts with them. Dias says she “envision[s] assistive robots being available at key locations, such as transit locations, in future smart cities.”

Assistive technology also will offer social ease to those with sight impairments, according to Aaron Steinfeld, an associate research professor at the Robotics Institute involved with TechBridgeWorld. People with disabilities are sometimes reluctant to ask for help, wanting to stay independent and avoid feeling like burdens. “There’s less social resistance when asking a robot to repeat a gesture,” Steinfeld says, “because robots are inherently patient and compliant.”

For More Information: 

Jason Togyer | 412-268-8721 | jt3y@cs.cmu.edu