We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The multi-tiered system architecture offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as the limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
Zhuo Chen is a PhD candidate in the Computer Science Department at Carnegie Mellon University, working with Professor Mahadev Satyanarayanan. He received his BE from Tsinghua University in 2012. During his senior year of college, he also interned with the Wireless and Networking Group at Microsoft Research Asia. His main research interests lie in distributed systems, mobile computing, and the application of computer vision in such contexts. Specifically, he explores how the effortless video capture of smart glasses, such as Google Glass, combined with cloudlet infrastructure, can benefit people.