Projects

Current Projects

2013-Present: Software Development Tools for Big Data Analytics: While big data analytics continue to grow in popularity among companies and organizations, the analytic implementations are often completed by software developers with little or no formal classroom experience in machine learning or data analysis. As a result, these developers often adopt tools that help non-experts create analytics efficiently by abstracting away the details of the algorithms. In practice, we find that these tools are fairly rigid in their data-formatting and algorithm requirements, which may not be appropriate, especially for early-stage data exploration. We began our research by following a development team through a two-week data exploration on a novel analytics challenge. We analyzed their interactions with their data and tools and used the results to direct our future work towards creating tools that support collaboration and iteration in data exploration and analytics.

2013-Present: Understanding and Defending Against Attacks on Big Data Analytics: People and organizations collect data to update their beliefs about the world and to make decisions. As the amount of data grows, they rely on analytics that mine and analyze 'big data' to make predictions about future events and to determine future courses of action. With so much depending on this data, big data analytics are prime targets for subversion by adversaries. However, little is known about what is actually happening to our data. This research focuses on understanding how and where attacks on analytics may happen and how we can build better defenses against them. We aim to answer questions such as: what are the indicators of adversarial subversion, would we find anything if we looked for it, and how are organizations and researchers currently defending against it?

Previous Projects

2012-2013: Human-Robot Interaction for Bossa Nova Robotics: Bossa Nova Robotics is developing commercially viable ballbots (robots that balance on a ball). Ballbots have many features that make them particularly well suited for human-robot interaction: they can be built tall and thin, they are compliant and so can safely be pushed around in crowded environments, and they are omni-directional and so can navigate around people. I designed and developed applications that use the Kinect RGBD camera to detect people and obstacles, dialog with people, and plan paths through the environment. These demonstrations were controlled through a Microsoft gamepad controller or through websites that I created.

2012-2013: Execution Memory for Grounding Interactions: As robots are introduced into human environments for long periods of time, human owners and collaborators will expect them to remember shared events that occur during execution. We define execution memory as the capability of saving interaction event information and recalling it for later use. We divide the problem into four parts: (1) salience filtering of sensor evidence and saving it to short-term memory, (2) archiving from short-term to long-term memory, (3) caching from long-term back to short-term memory, and (4) recalling memories for use in state inference and policy execution. We then provide examples of how execution memory can be used to enhance user experience in current and future robot applications.
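
To make the four parts concrete, here is a minimal sketch of an execution-memory pipeline. The class and method names, the salience threshold, and the keyword-based recall are illustrative assumptions for the sketch, not the project's actual design.

    from collections import deque
    from dataclasses import dataclass, field
    import time

    @dataclass
    class Event:
        timestamp: float
        description: str
        salience: float  # 0.0 (ignore) to 1.0 (highly salient)

    @dataclass
    class ExecutionMemory:
        salience_threshold: float = 0.5
        short_term: deque = field(default_factory=lambda: deque(maxlen=100))
        long_term: list = field(default_factory=list)

        def observe(self, event: Event) -> None:
            """Part 1: salience-filter sensor evidence into short-term memory."""
            if event.salience >= self.salience_threshold:
                self.short_term.append(event)

        def archive(self) -> None:
            """Part 2: move short-term memories into long-term memory."""
            self.long_term.extend(self.short_term)
            self.short_term.clear()

        def cache(self, keyword: str) -> None:
            """Part 3: pull matching long-term memories back into short-term memory."""
            for event in self.long_term:
                if keyword in event.description:
                    self.short_term.append(event)

        def recall(self, keyword: str) -> list:
            """Part 4: retrieve memories for state inference and policy execution."""
            return [e for e in self.short_term if keyword in e.description]

    memory = ExecutionMemory()
    memory.observe(Event(time.time(), "delivered coffee to office 7002", salience=0.9))
    memory.archive()
    memory.cache("coffee")
    print(memory.recall("coffee"))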

2009-2012: CoBot Robots: Beyond the Visitor Companion task, our CoBot robots perform tasks for people in our buildings. My thesis work focused on CoBot modeling the humans in the environment and planning to ask for help to overcome uncertainty and actuation limitations. More information can be found here, a video history of the project is here, and a demonstration video filmed for the National Science Foundation is here.

2009-2010: Visitor Companion Robot: I helped develop a visitor companion robot designed to escort visitors to their meetings and to provide them with information about their meeting hosts and other amenities, like getting coffee or water. While the visitor is good at identifying room numbers down the hall, they do not have the knowledge of the building layout to find their way easily. On the other hand, the robot has a map of the building and can plan paths easily between rooms, but may not always be able to localize itself without a lot of sensors. Because the visitor is always near the robot and is better at localization, the robot can ask for help when needed. My work focused on balancing the performance of the robot against its usability in terms of asking for help.
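
As one hedged illustration of the ask-for-help idea, the robot might track a belief over where it is and ask the visitor only when that belief becomes too uncertain. The entropy test and the threshold below are my assumptions for the sketch, not the actual policy used on the robot.

    import math

    def entropy(belief: dict) -> float:
        """Shannon entropy (in bits) of a discrete belief over rooms."""
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    def should_ask_for_help(belief: dict, threshold_bits: float = 1.0) -> bool:
        """Ask the visitor to read a nearby room number when the robot's
        location belief is too spread out to navigate confidently.
        (The 1.0-bit threshold is an illustrative assumption.)"""
        return entropy(belief) > threshold_bits

    confident = {"room 7002": 0.95, "room 7004": 0.05}
    confused = {"room 7002": 0.40, "room 7004": 0.35, "room 7006": 0.25}
    print(should_ask_for_help(confident))  # False: keep navigating
    print(should_ask_for_help(confused))   # True: ask the visitor for a room number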

2008-2009: Asking Questions: I have run a series of studies on how a system can ask questions when it detects that it is uncertain of the correct answer, and how it can elicit the most accurate responses from people. Subjects were given a task involving either an email application, an activity recognizer, or a robot. They were told that the learning application might ask questions if it was uncertain of its prediction. I varied the agents' questions along five dimensions, with each participant receiving a different combination: uncertainty, low/high-level context, amount of context, prediction, and supplemental feature selection. I found the combination of dimensions that maximizes the accuracy of user responses and validated it against a combination of the same dimensions that HCI experts suggested.

2007-2008: Dynamic Specialists: Recommender systems use a set of reviewers and advice givers with the goal of providing accurate user-dependent product predictions. In general, these systems assign weights to different reviewers as a function of their similarity to each user. As products are known to come from different domains, a recommender system also considers product domain information in its predictions. Because there are few reviews compared to the number of products, it is often hard to set the similarity-based weights, as there is not a large enough subset of reviewers who reviewed the same products. It has recently been suggested that ignoring domains will increase the amount of reviewer data and the overall prediction accuracy in a mediated way. However, if different reviewers are similar to a user in each product domain, then domain-specific predictions could be superior to mediated ones. We consider two advice giver algorithms that provide domain-specific and mediated predictions, respectively. We analyze both algorithms using large real data sets to characterize when each is more accurate and find that the domain-specific algorithm gives more accurate predictions for half of the users, while the mediated algorithm performs better for the other half. We provide online user-dependent selection algorithms to pick the best algorithm for each user while the user is requesting reviews for products, as sketched below.
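
Here is a minimal sketch of one such online selection rule, assuming a follow-the-leader strategy that routes each of a user's requests to whichever advice giver has accumulated the lower prediction error for that user so far. The stand-in predictors and the absolute-error measure are illustrative assumptions, not the algorithms from the paper.

    def make_selector(predictors):
        """Track one cumulative error per advice giver for a single user."""
        cumulative_error = {name: 0.0 for name in predictors}

        def predict(product):
            # Follow the leader: use the algorithm with the lowest error so far.
            best = min(cumulative_error, key=cumulative_error.get)
            return best, predictors[best](product)

        def feedback(product, actual_rating):
            # After the user rates the product, update every algorithm's error.
            for name, fn in predictors.items():
                cumulative_error[name] += abs(fn(product) - actual_rating)

        return predict, feedback

    # Stand-in predictors: one uses the product's domain, one ignores it.
    def domain_specific(product):
        return {"books": 4.2, "movies": 3.1}.get(product["domain"], 3.5)

    def mediated(product):
        return 3.8  # a single cross-domain estimate

    predict, feedback = make_selector({"domain-specific": domain_specific,
                                       "mediated": mediated})
    choice, rating = predict({"domain": "books"})
    feedback({"domain": "books"}, actual_rating=4.0)
    print(choice, rating)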

2006-2007: My senior thesis with the Smart Home group was on learning family routines in order to generate reminders before the family forgot something. The reminders would be presented to parents on their cell phones.

2004-2007: Kiva is a collaborative tool for students that combines asynchronous online communication with synchronous meeting rooms. I studied collaborative learning in non-co-located environments (compared to co-located collaboration) by determining the differences in group communication between the two conditions.

2003: Valerie the Roboceptionist (now Tank) is located in Newell-Simon Hall at Carnegie Mellon University. The robots have personalities and backstories developed by the Drama Department and also help visitors find their way to offices in the School of Computer Science. I helped populate the chatbot responses for Valerie.

2002-2004: GRACE and George are robots designed to attend the AAAI and IJCAI conferences for the Robot Challenge. In 2003, I designed the facial expressions the robots used to express their mood (especially frustration) at the conference. In 2004, I developed the direction-giving algorithm that the robots used to help conference attendees find rooms.