Email firstname.lastname@example.org by noon, Friday, September 2, with the phrase Mobile Computing Class Projects in the subject line. We will form teams taking your rankings into account. We will notify you of the team pairings before the Tuesday, September 6 class so you can meet your teammates and set up your first meetings.
Wearable Cognitive Assistant for Automatic External Defibrillators (AED)
  (a) Hongkun Leng, Haodong Liu, Yuqi Liu
  (b) Jineet Doshi, Toby Li, Rui Silva
Privacy Mediator for Audio Data
  Ankit Jain, Rajat Pandey, Ayushi Singh
Visual Computing on Small Form-factor Cloudlets
Hub for Internet of Things
  Ken Ling, Lehao Sun, Mengjin Yan
Rehabilitation with Myoelectric Prosthetics for Amputation Recovery
Low-Latency Cloudlet-Based Pokemon Go
Interactive Rehabilitation Device
  Eric Markvicka, Tianshi Li
Using Wearable Sensors and Data Analysis to Detect Changes in the Health Status of Heart Disease Patients
Combining a Variety of Sensors for User Oriented Internet of Things
  Kyuin Lee, Raghu Mulukutla, Qian Yang
Patient Motion Detection
Efficient Large File Sharing on Cloudlets
  Arushi Grover, Preeti Murthy, Prathi Shastry
Using an Automatic External Defibrillator (AED) is a time-critical task. If done correctly, it can save many lives. However, there is currently no easy way to guide a novice user through the procedure without on-site support from trained personnel. Wearable Cognitive Assistants can change this. With wearable devices like Google Glass, it is possible to continuously capture what the user is looking at. An assistance system for tasks like using an AED can be built on top of this such that: 1) the system tracks the user's progress with computer vision techniques; 2) the cognitive assistant provides step-by-step guidance to the user; and 3) the system gives the user feedback based on that progress. The video captured by Glass will be streamed to a cloudlet and processed in real time.
This project requires some computer vision background (if you do not know anything yet, you can learn along the way). We have a platform named Gabriel that takes care of communication between mobile clients and cloudlets; the real-time video transmission part is already built and open-sourced. You will mainly use Python to program the cognitive processing for the AED task, with some Android programming (on Google Glass) for small customizations to the client.
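To make the "understand progress, then guide" loop concrete, here is a minimal sketch of the step-tracking logic a cognitive engine might run on the cloudlet. The state names, step ordering, and guidance strings are illustrative assumptions, not the real AED protocol or Gabriel's actual API; in the real system the symbolic states would come from the computer vision pipeline processing Glass frames.

```python
# Hypothetical sketch: step tracking for an AED cognitive assistant.
# AED_STEPS pairs the visual state the vision pipeline must detect
# with the guidance spoken once that step is confirmed complete.
AED_STEPS = [
    ("aed_open", "Open the AED case and power it on."),
    ("pads_out", "Remove the pads from the package."),
    ("pads_placed", "Place one pad on the upper right chest, one on the lower left side."),
    ("analyzing", "Stand clear while the AED analyzes the heart rhythm."),
]

class AEDAssistant:
    """Tracks user progress through the AED procedure and emits guidance."""

    def __init__(self):
        self.step = 0

    def on_detection(self, detected_state):
        """Called with the symbolic state extracted from the current frame.
        Returns the next instruction, or None if the frame does not
        advance the procedure (user still on the current step)."""
        expected, _ = AED_STEPS[self.step]
        if detected_state != expected:
            return None
        self.step += 1
        if self.step == len(AED_STEPS):
            return "Procedure complete. Follow the AED's voice prompts."
        return AED_STEPS[self.step][1]
```

The point of the structure is that out-of-order detections are ignored rather than skipping steps, which keeps the guidance consistent even when the vision pipeline produces spurious detections.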
Consumer acceptance of IoT deployment is shadowed by privacy concerns. Users lack control over raw data that is streamed directly from sensors to the cloud, and the current cloud-based IoT architecture has no clean way to let users filter and retain their sensitive data. A vision of using cloudlets to preserve privacy has been proposed. Imagine the following scenario. You are in a room talking with friends. Microphones in your smartphone and in nearby security cameras are recording the conversation and sending it to the cloud for analysis. You rely on cloud audio analysis services to automatically take notes and set up reminders from conversations, but you want sensitive information, for example salaries and social security numbers, excluded from the raw audio that is uploaded. You leverage cloudlets as privacy mediators: audio data is transmitted to a trusted cloudlet before going to the cloud, and on the cloudlet, audio analysis is performed in real time to filter out sensitive information. For example, the cloudlet replaces the email password you said during a conversation with a beep.
In this project, you will build such a privacy mediator for audio data using cloudlets. We already have a framework named Gabriel that takes care of data transmission between mobile clients and cloudlets, so you will focus on audio analysis on the cloudlet. There are several widely used open-source automatic speech recognition frameworks you can leverage (CMU Sphinx, Kaldi, etc.).
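One plausible shape for the mediator is: run ASR with word-level timestamps, then overwrite the sample ranges of sensitive words with a tone before forwarding the audio to the cloud. The sketch below assumes the ASR engine (e.g. Sphinx or Kaldi) can provide (word, start, end) triples; the keyword list and sample-rate handling are illustrative.

```python
import math

SENSITIVE = {"password", "salary", "ssn"}  # illustrative keyword list

def beep(n_samples, rate=16000, freq=1000.0):
    """Generate a sine-tone beep used to overwrite sensitive spans."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n_samples)]

def redact(samples, words, rate=16000):
    """words: list of (word, start_sec, end_sec) triples, assumed to come
    from an ASR engine with word-level timestamps. Returns a copy of the
    audio with sensitive words replaced by a beep of equal length."""
    out = list(samples)
    for word, start, end in words:
        if word.lower() in SENSITIVE:
            a, b = int(start * rate), int(end * rate)
            out[a:b] = beep(b - a, rate)
    return out
```

Because the beep has exactly the length of the redacted span, the audio timeline is preserved, which matters if the cloud service aligns its analysis to timestamps.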
A promising application of cloudlets is to support rich, interactive, visual applications in a mobile setting. For computer vision applications, such cloudlets typically need to be very beefy, often consisting of multiple desktop-class machines with discrete graphics cards. Such cloudlets tend to be physically large and power-hungry. At the other end of the spectrum are small form-factor cloudlets that are physically small and low power. These can for example be incorporated into wireless access points, or deployed in remote locations, powered by a small solar panel. Such small form-factor cloudlets are limited in computing capability: they may have 1-4 computing cores, and no discrete graphics cards, though integrated GPUs may be available.
Can such small cloudlets be useful for visual computing applications? This project will demonstrate a computer vision / video analytics application using a small cloudlet platform. OpenCL will be used to parallelize computation across available cores and integrated GPUs. The actual application may be newly created for this project, or it can be a port / reimplementation of an existing application that runs on large, powerful cloudlets.
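The core idea, whatever the kernel language, is to tile the frame and distribute tiles across the available compute units. The sketch below uses a Python thread pool over a toy bright-pixel count purely to illustrate that tiling structure; in the actual project the per-tile work would be an OpenCL kernel dispatched across the CPU cores and integrated GPU, not Python threads.

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    """Stand-in per-tile kernel: count 'bright' pixels in a band of rows.
    In the real project this would be an OpenCL kernel."""
    return sum(1 for row in tile for px in row if px > 128)

def split_rows(frame, n_tiles):
    """Split a frame (a list of pixel rows) into n_tiles row bands."""
    step = max(1, len(frame) // n_tiles)
    return [frame[i:i + step] for i in range(0, len(frame), step)]

def bright_pixels(frame, workers=4):
    """Partition the frame and reduce the per-tile results."""
    tiles = split_rows(frame, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_tile, tiles))
```

A row-band decomposition with a final reduction maps directly onto an OpenCL NDRange plus a host-side sum, so the same structure carries over when the kernel is ported.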
One likely deployment model for Internet of Things is to have centralized hubs that can offer devices network connections, check for firmware or software updates, and monitor traffic for anomalous behaviors. This project seeks to develop new ways of adding new devices to this hub in a simple and secure manner, as well as offering new kinds of services, such as linking different devices together or doing simple kinds of end-user programming.
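The "linking devices together" and "end-user programming" services can be viewed as trigger-action rules registered at the hub. The following is a minimal sketch under that assumption; device names, event names, and the rule format are all hypothetical.

```python
class Hub:
    """Minimal sketch of device linking on an IoT hub: devices publish
    events, and end-user rules link an event on one device to an action
    on another. Names are illustrative."""

    def __init__(self):
        self.rules = []  # list of (trigger_device, trigger_event, action)
        self.log = []    # record of actions taken, for inspection

    def link(self, trigger_device, trigger_event, action):
        """End-user rule: 'when <device> reports <event>, do <action>'."""
        self.rules.append((trigger_device, trigger_event, action))

    def publish(self, device, event):
        """Called when a device reports an event; fire matching rules."""
        for t_dev, t_ev, action in self.rules:
            if (t_dev, t_ev) == (device, event):
                self.log.append(action())

# Example end-user rule: opening the door turns on the hallway light.
hub = Hub()
hub.link("door_sensor", "opened", lambda: "hallway_light: on")
hub.publish("door_sensor", "opened")
```

A hub structured this way is also a natural place to monitor traffic for anomalies, since every device event already flows through `publish`.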
For individuals who undergo partial arm amputations, robotic myoelectric prosthetic devices can restore a great deal of arm and hand functionality. A significant challenge in adapting to such a device is learning to control it effectively and safely. In this project, we will use a Microsoft Kinect and a skin EMG reader to provide feedback to users learning to use a prosthetic device. Participants in this project will develop machine learning tools to determine what feedback to give a user performing physical therapy exercises, helping them learn to use their prosthetic device correctly. Example exercises are: lifting a light object, lifting a heavy object, lifting a tray, and pouring from a glass jug. We have collected Kinect data from 40 subjects; EMG data collection has started, and the dataset will be complete by the end of the third week of class.
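One simple baseline for the "what feedback to give" question is to classify each exercise repetition against labeled examples. The sketch below uses a nearest-centroid rule over per-repetition feature vectors; the feature definitions and error-mode labels are illustrative assumptions, and the real project would train on the 40-subject Kinect/EMG dataset with more capable models.

```python
from statistics import fmean

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [fmean(col) for col in zip(*rows)]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ExerciseFeedback:
    """Sketch: classify one repetition's features (e.g. Kinect joint
    angles plus EMG energy) as 'correct' or a known error mode, using
    a nearest-centroid rule. Labels are hypothetical."""

    def __init__(self, labeled_reps):
        # labeled_reps: {label: [feature_vector, ...]}
        self.centroids = {lab: centroid(rows)
                          for lab, rows in labeled_reps.items()}

    def feedback(self, features):
        """Return the label of the nearest centroid."""
        return min(self.centroids,
                   key=lambda lab: dist2(features, self.centroids[lab]))
```

A nearest-centroid baseline is useful mainly as a sanity check on the features: if it cannot separate correct from incorrect repetitions, the features need work before trying richer classifiers.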
Pokemon Go has been a very popular game since this summer. In this project, we are going to implement a Pokemon-Go-like game based on cloudlets. The game will include many of the features Pokemon Go already has (e.g., simple augmented reality, real-world-location-based gameplay), plus new features enabled by cloudlets (e.g., low-latency interaction between players). For the existing features, we aim at implementing simplified versions to make the game functional. For the new features, we aim at designing and implementing them in a way that showcases the cloudlet. We can also measure the performance difference of these new features (e.g., interaction latency between players) between cloudlet and cloud.
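Measuring cloudlet-versus-cloud interaction latency mostly comes down to timing a round trip of a small game message against each backend. A minimal sketch, assuming a UDP echo stand-in for the game server (the real server, message format, and transport are up to the team):

```python
import socket
import threading
import time

def echo_server(sock):
    """Echo one datagram back to its sender (stand-in for a cloudlet
    relaying a player interaction)."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

def measure_rtt(server_addr):
    """Round-trip time of one interaction message, in seconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.settimeout(2.0)
        t0 = time.perf_counter()
        c.sendto(b"player-move", server_addr)
        c.recvfrom(1024)
        return time.perf_counter() - t0

# Demo against a local echo server; port 0 lets the OS pick a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
rtt = measure_rtt(srv.getsockname())
```

Running the same probe against the cloudlet address and a cloud endpoint, many times, gives the latency distributions the comparison needs (medians and tails, not single samples).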
This project will use a wearable biomonitoring device, adhesively mounted to the hand, to estimate position and monitor the user's heart rate and blood oxygen saturation. The device will be used for stroke rehabilitation, to understand when the hand becomes impeded due to muscle stiffening. Specifically, the project will aim to answer the following questions: 1) what is the position of the hand when muscle stiffening occurs, 2) what motion of the hand initiated the muscle stiffening, and 3) how was the muscle stiffening alleviated. In addition, the user's heart rate and blood oxygen saturation will be monitored at the fingertip. Data from the wearable device will be transmitted to a smartphone via a Bluetooth link, and the cloudlet infrastructure will be used to offload data processing and storage. An interactive user interface will display relevant biomonitoring signals and suggest ways to prevent or alleviate muscle stiffening.
We aim to develop a system of sensors and data analysis software that alerts clinicians to patient decline before the patient reaches the point where hospitalization is called for. By applying modern analytics to these new and potentially voluminous sensor data streams, clinicians can make good use of the data with only a small increase in workload. Early intervention requires careful monitoring of the patient's vital signs: for example, heart patients may not show symptoms of heart failure at rest, but they may show them on exertion. We have collected data, and used appropriate existing datasets, for heart rate, ECG, blood pressure, and activity, and we are in the process of collecting sleep data as well. We have used wearable sensor platforms such as the Apple Watch and the Zephyr chest band.
Peak oxygen consumption (peak VO2) is an index of the functional capacity of a patient's heart, and provides useful information for the prognosis of heart failure. Traditional tests directly measure peak VO2 while the patient exercises, which requires expensive gas analyzers. Instead, patients can perform simple exercises that approach peak VO2, and their performance can be correlated with how their disease is progressing. One such exercise is the Six Minute Walk Test (6MWT). We intend to use externally wearable sensors such as the Apple Watch to measure a patient's performance in the 6MWT. The data collected will be used to train predictive models using machine learning techniques. Remotely monitoring the progress of patients using wearable devices and the 6MWT will reduce the time and cost of monitoring heart patient recovery. The project will deal with some other use cases as well.
Internet of Things (IoT) applications are based on large amounts of sensor data, and making that data usable to everyday people is becoming a challenge as well. Taking care of elderly persons is becoming a real challenge in many countries. This project uses context-aware and Internet of Things computing technology to aid care coordinators in keeping their patients healthy, happy, independent, and safe in their own homes. The system will (1) allow care coordinators to view and add their patients' information, (2) provide some data analytics, (3) gather medical and social/emotional data to provide a holistic view of each patient's health status, and (4) provide alerts and notifications when a patient deviates from their baseline health. We combine mobile and stationary sensors with EMA (Ecological Momentary Assessment) surveys for parameters that cannot be sensed (e.g., social activity, mood). We aim to model people's sleep patterns, physical activity, stress levels, and social activity, showing end users details of their own behaviors and offering the community aggregated summaries. These technologies should enable self-monitoring and sharing of progress with healthcare providers.
Cognitively impaired and impulsive patients pose a unique challenge to fall prevention as these patients are not receptive to the standard fall risk assessment, prevention, and teaching interventions. There is little in the literature specific to fall prevention interventions in these types of patients.
We would like the students to:
- Develop possible technology-based solutions that would detect patient movement off or out of the bed, which is a precursor to a fall.
- Utilize innovative technology to alert staff, or to deter the patient from continuing the movement of exiting the bed.
Typical case scenario: The patient is a 63-year-old male who has a traumatic brain injury. His cognitive impairment has resulted in short-term memory deficits and impulsive behaviors. The nurse has just been in the room providing the patient with care, has ensured that personal items and the call bell are within reach, and has asked the patient if he needs anything else. The patient denies any further needs. A bed alarm is activated and in use. (Current bed alarms sound when the patient's body weight is off the bed alarm/sensor mat.) Prior to leaving the room, the nurse reminds the patient to call for assistance if he needs anything further. The patient again responds affirmatively, demonstrating understanding. However, a cognitively impaired patient may understand in that particular moment but not recall the instruction 5 minutes later. Fifteen minutes after leaving the room, the bed alarm sounds. The nurse enters the room to find the patient on the floor next to the bed. The patient states he wanted to go to the bathroom. He did not remember to call for assistance prior to this activity.
Charge: The task would be for the students to innovate and develop a multi-modality alert for detecting patient movement that indicates the patient is attempting a motion that is a precursor to getting out of bed. For example, in the case scenario, the patient sat up, tried to stand, and fell. The bed alarm signaled this movement, but is not a proactive solution as the patient’s body has already exited the bed. Design an alert system triggered by patient movement that is a precursor to this shift in body weight.
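To make "multi-modality alert" concrete, here is one possible fusion rule, stated purely as a sketch: combine a pressure mat's estimate of how much weight remains over the mattress center with a motion score from a second sensor (e.g. a bedside depth camera). Every sensor choice and threshold here is a hypothetical assumption that would have to be tuned and validated on real patient data.

```python
def precursor_alert(readings):
    """Hypothetical two-modality rule for detecting bed-exit precursors.

    readings: list of (center_weight_fraction, motion_score) samples,
    where center_weight_fraction is the share of body weight over the
    mattress center (pressure mat) and motion_score is normalized 0-1
    (e.g. from a depth camera). Returns True if the pattern looks like
    sitting up / shifting to the bed edge, i.e. BEFORE weight leaves
    the bed entirely (which is when current alarms fire)."""
    for center_weight, motion in readings:
        off_center = center_weight < 0.6    # weight shifting to the edge
        still_in_bed = center_weight > 0.1  # below this, it's a late alarm
        if off_center and still_in_bed and motion > 0.5:
            return True
    return False
```

Requiring both modalities to agree is what distinguishes a precursor alert from the existing weight-off-mat alarm: pressure shift alone can be a harmless reposition, and motion alone can be a visitor or staff member.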
The Coda Distributed File System (Coda) has multiple features that make it desirable in poorly connected environments. Its aggressive, persistent, whole-file caching strategy and its ability to continue read/write operation even when the Coda servers are unreachable make it very well suited to providing file services on cloudlets with possibly intermittent connectivity, close to a mobile end user. However, with several modern workloads, such as MapReduce or video stream analysis, that work with large, append-only files, the whole-file caching strategy is inefficient, because a multi-GB file may be sent back to the server multiple times. Several things need to improve for Coda to handle such large files effectively. First, in the existing design Coda has no insight into individual read and write operations. By adding support for FUSE, either directly or through a proxy process, it would be possible to observe which parts of a file are updated or accessed. Second, there has to be some sort of immutable file storage on the server that allows clients to fetch data from an older version of a file (to maintain the existing open-close consistency model). This could be implemented as an S3(-compatible) storage pool, or possibly a more efficient delta-packing format, similar to git packfiles.
From there, this work can take many possible directions. On the write path, it becomes possible to track which parts of a file have actually been modified, so that a binary delta can be generated and sent to the server instead of the whole file. On the read path, it becomes possible to fetch file contents on demand as they are accessed, with the caveat that we will probably lose some performance and some ability to survive network failures, because only fragments of files may be cached. Implementing this may require some kernel-level work, deep knowledge of C/C++ programming, and a lot of careful thinking about consistency in a distributed system.
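The write-path idea can be illustrated with a block-level delta: hash fixed-size blocks of the old and new versions, and ship only the blocks that differ (which, for append-only workloads, is mostly the new tail). This is a sketch of the concept in Python, not Coda's actual wire format or implementation language, and the block size is an arbitrary choice.

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_digests(data):
    """SHA-256 digest of each fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta(old, new):
    """Return (block_index, bytes) pairs for blocks that changed or
    were appended between two versions. A client with FUSE-level
    visibility into writes could send only these to the server."""
    old_d, new_d = block_digests(old), block_digests(new)
    changed = []
    for i, d in enumerate(new_d):
        if i >= len(old_d) or old_d[i] != d:
            changed.append((i, new[i * BLOCK:(i + 1) * BLOCK]))
    return changed

def apply_delta(old, changed, new_len):
    """Server side: rebuild the new version from the old one plus delta."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for i, blk in changed:
        buf[i * BLOCK:i * BLOCK + len(blk)] = blk
    return bytes(buf)
```

For a multi-GB append-only log, the delta is proportional to the appended data rather than the file size, which is exactly the saving the whole-file caching strategy forgoes today. Note that a real implementation would also need the immutable server-side version store described above, so that a delta is always computed against a version both sides agree on.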