Mobile and Pervasive
Demo Day Videos and Posters (December 6, 2018)
02. Virtual Coach: Myoelectric Prosthetics for Amputation Recovery
Students: Saksham Chitkara, Varun Joshi, Kevin Wu, Mohamed Razouane
Mentors: Asim Smailagic and Dan Siewiorek
For individuals who undergo partial arm amputations, robotic myoelectric
prosthetic devices can restore a great deal of arm and hand
functionality. A significant challenge in adapting to a prosthetic
device is learning to use one's brain to control the device
effectively and safely.
In this project, we will use a Microsoft Kinect and a skin EMG reader to
provide feedback to users learning to use a prosthetic device.
Participants in this project will develop machine learning tools to
determine what feedback to provide to a user performing physical
therapy exercises to help them learn to use their prosthetic device
correctly. Example exercises are: lifting a light object, lifting a
heavy object, passing an object from one hand to the other, and lifting
a tray. Using the Unity game engine, we have developed three
two-dimensional games that users can control using the EMG sleeve, as
well as two virtual reality games.
Data was collected from 12 volunteers play-testing these games for 10-20
minutes at a time. Additionally, several subjects performed activities
of daily living, such as passing an object from one hand to the other,
while simultaneous Kinect and EMG data were recorded. For this dataset we
collected 233 instances of a single activity of daily living, specifically
the act of passing a light object from one hand to the other.
Using these datasets, we would like to address the following machine
learning classification tasks. The first is to identify the type of
muscular activity a user is performing given 8 channels of EMG data
(e.g. wrist extension or arm rotation). The second is to identify if a
subject is correctly performing a physical task, such as transferring
an object from one hand to the other, given EMG data coupled with
Kinect 2 depth and RGB data.
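To make the first task concrete, here is a minimal sketch of a plausible
baseline, assuming windowed root-mean-square (RMS) features over the 8
EMG channels and a scikit-learn classifier; the window length, file
names, and labels below are hypothetical, not part of the collected
dataset:

    # Baseline sketch: classify muscular activity (e.g., wrist extension
    # vs. arm rotation) from 8-channel EMG. Files/labels are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    WINDOW = 200  # samples per window; depends on the EMG sampling rate

    def rms_features(emg, window=WINDOW):
        """Root-mean-square energy per channel over non-overlapping windows."""
        n = len(emg) // window
        chunks = emg[:n * window].reshape(n, window, emg.shape[1])
        return np.sqrt((chunks ** 2).mean(axis=1))   # shape: (n, 8)

    X = rms_features(np.load("emg_recording.npy"))   # (n_windows, 8) features
    y = np.load("window_labels.npy")                 # one activity label per window

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

The second task could start from the same skeleton, with features
derived from the Kinect depth and RGB streams concatenated to the EMG
features.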
05. Augmented Reality Shooting Gaming: Gestural Interface
Students: Tan Li, Yang Zhang
Mentor: Padmanabhan Pillai
Augmented reality / mixed reality is an emerging technology that may
revolutionize mobile gaming. The idea is to combine elements of the
real world, including the user's movements and actions, with elements
of a virtual world to produce an immersive gaming environment. For
example, a game may place a virtual monster or treasure chest at some
real location, with which the user can interact. The virtual elements
should be displayed and moved consistently with the real-world
surroundings as the user moves. Doing this well requires a reasonably
powerful device with cameras, sensors, and displays, along with
cloudlets to do the heavy computational steps.
Although a complete game is beyond the scope of a semester-long
project, several projects can be defined to demonstrate various aspects
of AR gaming. All of the projects will use an Android device as the
front end (possibly with a Cardboard or other VR headset adapter) and a
Linux-based cloudlet for computational offload. OpenGL will be used to
display mixed reality scenes, and a combination of OpenCV, other vision
libraries, and custom code (C++ or Python) will be used on the
cloudlet. In a mixed reality game, the user will interact with objects
and the environment using their hands, so we need a way to detect
movements of the arm and interpret actions. Potential demo: the user
aims and shoots at AR targets using a virtual gun held in the hand.
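As a starting point, the offload path might look like the following
cloudlet-side sketch, which receives length-prefixed JPEG frames from
the phone and replies with a bounding box for the user's hand. The
port, the framing protocol, and the crude skin-color heuristic (a
placeholder for real gesture recognition) are all assumptions:

    # Sketch: cloudlet loop that receives length-prefixed JPEG frames
    # and returns one detection result per frame. Port and the simple
    # skin-color hand heuristic are assumptions, not a fixed design.
    import socket, struct
    import cv2
    import numpy as np

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("client disconnected")
            buf += chunk
        return buf

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 9000))
    srv.listen(1)
    conn, _ = srv.accept()

    while True:
        (length,) = struct.unpack("!I", recv_exact(conn, 4))
        frame = cv2.imdecode(np.frombuffer(recv_exact(conn, length), np.uint8),
                             cv2.IMREAD_COLOR)
        # Placeholder "gesture" step: find the largest skin-colored blob.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 48, 80), (20, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            reply = f"{x},{y},{w},{h}"
        else:
            reply = "none"
        conn.sendall(struct.pack("!I", len(reply)) + reply.encode())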
07. Wearable Time Machine
Students: Abinaya Hariharasubramanian, Nirav Atre, Shilpa George
Mentor: Junjue Wang
Objects help people remember. Many things are valuable because they
carry part of the past. People recall emotions and experiences when
they see these objects. It could be a tarnished birthday card that
reminds one of an old friend. It could be a magnet buried inside a
dusty box that reminds one of a past trip. While people's memories may
fade, digital records do not. What if there were a wearable time
machine that could help people relive their past experiences?
Using head-mounted smart glasses (e.g. Google Glass and Microsoft
HoloLens), this project aims to build an object detection-based system that helps
people relive their past experiences. The application displays short
video clips from the past to users through head-mounted smart glasses
when users see special objects. To create such an experience, the
application would record short video segments throughout a day using
the smart glasses. Users or the application itself would mark some
objects as “memory triggers”. The application builds object detectors
for these objects and associates video segments with them. Then, when
the application detects a memory trigger, it retrieves and displays
relevant video segments that are associated with the object to augment
memory recollection and help users relive their past.
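The retrieval core could start as a simple mapping from trigger labels
to recorded clips. A minimal sketch, in which detect_objects() and
play_clip() are hypothetical stand-ins for the DNN detector and the
glasses display:

    # Sketch: associate "memory trigger" objects with video segments and
    # replay a clip when the detector reports a trigger in the live feed.
    import collections, time

    # trigger label -> list of (timestamp, clip_path) recorded earlier
    memories = collections.defaultdict(list)
    last_shown = {}      # trigger label -> last replay time
    REPLAY_GAP = 60      # seconds between replays of the same trigger

    def index_segment(labels, timestamp, clip_path):
        """Called while recording: note which clips contain which triggers."""
        for label in labels:
            memories[label].append((timestamp, clip_path))

    def on_live_frame(frame, detect_objects, play_clip):
        """Called per live frame: replay a past clip when a trigger is seen."""
        now = time.time()
        for label in detect_objects(frame):      # e.g., {"birthday_card"}
            clips = memories.get(label)
            if clips and now - last_shown.get(label, 0) > REPLAY_GAP:
                last_shown[label] = now          # avoid replaying every frame
                _, clip_path = clips[-1]         # most recent associated clip
                play_clip(clip_path)             # display on the smart glasses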
This project provides you with an opportunity to work with smart glasses,
cloudlets, and deep neural networks (DNNs). You will learn through
practice how to design and build a real-time video streaming and
analysis application using deep learning based object detection.
Depending on your interests, features can be changed. Familiarity with
DNNs is preferred but not required.
08. Cloudlet-based Real-Time Deep Face Swap on Mobile Devices
Students: Ziheng Liao
Mentor: Junjue Wang
Recent developments in deep learning have made it possible to automatically
alter and synthesize realistic images. One interesting application is
Face Swap, which superimposes a person’s face, including facial
movements, onto another person in a natural-looking way. Existing
open-source projects leverage autoencoders and generative adversarial
networks to achieve such effects. However, they require significant
computational power, and the processing happens offline. This
project aims to build a real-time deep face swap application on mobile
devices by offloading computation to a cloudlet, a small data-center
that is one wireless hop away from the mobile device. The application
would stream the camera feed from the mobile device, perform face swap
on the cloudlet using deep neural networks (DNNs), and transmit the
altered video stream back to the mobile device for display.
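The device-side loop might look like the following minimal sketch,
using OpenCV as a stand-in for the Android camera and display; the
cloudlet host, port, and length-prefixed JPEG protocol are assumptions,
and the face-swap DNN itself is treated as a black box on the cloudlet:

    # Sketch: stream camera frames to a cloudlet and display the
    # face-swapped frames it sends back. Host/port and the framing
    # protocol are assumptions, not a fixed API.
    import socket, struct
    import cv2
    import numpy as np

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("cloudlet disconnected")
            buf += chunk
        return buf

    sock = socket.create_connection(("cloudlet.local", 9001))  # hypothetical
    cam = cv2.VideoCapture(0)

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        _, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        sock.sendall(struct.pack("!I", len(jpeg)) + jpeg.tobytes())
        # The cloudlet runs the face-swap DNN and replies with the result.
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        swapped = cv2.imdecode(np.frombuffer(recv_exact(sock, length), np.uint8),
                               cv2.IMREAD_COLOR)
        cv2.imshow("face swap", swapped)
        if cv2.waitKey(1) == 27:    # Esc to quit
            break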
This project provides you with an opportunity to work with cloudlets and
DNNs. You will learn through practice how to design and build a
real-time video streaming and deep learning-based analysis application.
Depending on your interests, features can be changed. Making existing
face swap DNNs run in real time can be challenging, so strong system
optimization skills and familiarity with DNNs are preferred.
10A. Visual Search with Dynamic Participation and Fault Tolerance
Students: Karan Dhabalia, Matthew Chu
Mentor: Ziqiang Feng
A child has
gone missing. An AMBER alert is issued. The authorities launch a visual
query, including face recognition, over a network of public surveillance
cameras. The story so far is well understood.
Now, some good
citizens offer to help by making their personal video feeds searchable
-- dash cameras, smart glasses, smartphones, recreational drones, home
surveillance, etc. On one hand, these citizens may “join” the search at
any point in time. On the other hand, they may “leave” the search at
any point in time when, for example, they enter private zones or run
low on battery. These citizens should therefore be able to come and go
easily while the search is in progress.
In this project, you will
develop a system that realizes the above vision. You are encouraged to
base your project on OpenDiamond
(https://github.com/cmusatyalab/opendiamond) but it is not mandatory.
OpenDiamond is a system for searching non-indexed data on an edge
computing infrastructure. Although this project involves searching
image or video data, you are not required to have prior computer vision
knowledge. OpenDiamond comes with a number of visual filters (e.g., RGB
histogram, SIFT matching, DNN) that you can reuse off-the-shelf.
This is variant A of the project described above. It focuses on
de-coupling edge nodes' participation time from a search's lifetime.
Specifically, different edge nodes may join or leave the search at
arbitrary times. The front-end should display a stream of search
results without any interruption whenever new nodes join the search or
a node stops sharing data; a sketch of one way to achieve this follows
the lists below.
What you will learn:
- Formalizing the design requirements of a system from a motivating scenario
- Designing an execution model that facilitates the required agility
- Using VM technologies (e.g., Docker containers) in a larger system
- The tradeoff between data shipping and command shipping
What you need to already know:
- Programming with Python, JSON, XML, etc.
- Basic programming with networked systems (e.g., TCP sockets, Flask)
- Concepts of remote procedure calls (RPC)
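A minimal sketch of that uninterrupted front-end stream, assuming each
edge node exposes a result iterator (fetch_results() here is
hypothetical): every node gets its own reader thread feeding one shared
queue, so the display loop never depends on any single node staying
connected.

    # Sketch: merge results from edge nodes that join and leave at
    # arbitrary times. Each node gets a daemon thread pushing into one
    # shared queue; the front end drains the queue regardless of who
    # is currently connected. fetch_results() is hypothetical.
    import queue, threading

    results = queue.Queue()

    def node_worker(node_id, fetch_results):
        try:
            for item in fetch_results(node_id):   # blocks on this node only
                results.put((node_id, item))
        except ConnectionError:
            pass                                  # node left; thread exits

    def on_node_join(node_id, fetch_results):
        """Call whenever a citizen's device joins the ongoing search."""
        threading.Thread(target=node_worker, args=(node_id, fetch_results),
                         daemon=True).start()

    def display_loop(show):
        """Front end: render results as they arrive, from whichever node."""
        while True:
            node_id, item = results.get()         # never tied to one node
            show(node_id, item)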
10B. A Visual Search Platform with Heterogeneous Edge Nodes
Students: Roger Iyengar, Chelsea Hu
Mentor: Ziqiang Feng
This is variant B of the above project. It focuses on handling the
heterogeneity of participating edge
nodes. While some powerful edge nodes (e.g., dash camera with an
on-vehicle cloudlet) may be able to run expensive search pipelines
(e.g., including a DNN), other weaker edge nodes (e.g., drones) may
not. The students need to develop a systematic and principled way for
all edge devices to contribute to the same search while possibly
undertaking different processing onboard.
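One principled starting point, sketched below, is to describe the
search as an ordered filter chain (using the OpenDiamond-style filters
mentioned above) and let each node run the longest prefix it can
afford, shipping the remaining stages upstream; the cost numbers and
capability budgets are hypothetical:

    # Sketch: split one search pipeline across heterogeneous edge nodes.
    # Each filter declares a rough cost; a node runs the longest
    # affordable prefix and forwards surviving items for the remaining
    # stages to run elsewhere (e.g., on a cloudlet). Costs are made up.

    PIPELINE = [                      # cheap -> expensive, order matters
        ("rgb_histogram", 1),
        ("sift_matching", 10),
        ("dnn_face_recognition", 100),
    ]

    def split_pipeline(node_budget):
        """Return (stages to run on the node, stages to ship upstream)."""
        local, remote, spent = [], [], 0
        for name, cost in PIPELINE:
            if not remote and spent + cost <= node_budget:
                local.append(name)
                spent += cost
            else:
                remote.append(name)
        return local, remote

    # A drone might only afford the histogram; a dash cam with an
    # on-vehicle cloudlet could run the whole chain onboard.
    print(split_pipeline(5))    # (['rgb_histogram'], [the two costlier stages])
    print(split_pipeline(200))  # (all three stages local, nothing remote)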
Last updated 2019-01-24 by Satya