SCSFC

When an operating system or hypervisor is compromised, the attacker can easily gain access to the memory of any lower-privileged layer. This means, for example, that an attacker who exploits a security flaw in the OS can compromise private data stored in the processes of other users on the system, even if those processes have no security flaws themselves. In this talk, I will discuss Iso-X, a hardware-supported framework that isolates security-critical pieces of an application so that they can execute securely even in the presence of compromised system software. Isolation in Iso-X is achieved by creating and dynamically managing compartments that host critical fragments of code and their associated data. Iso-X provides fine-grained isolation at the memory-page level, flexible allocation of memory, and a low-complexity, hardware-only trusted computing base.
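To make the compartment idea concrete, the following toy model (purely illustrative, not the actual Iso-X hardware mechanism, and not taken from the talk) tags each physical page with an owning compartment and performs a conceptual per-access ownership check, with compartment 0 standing in for untrusted system software:

    # Toy software model of page-granularity isolation (not the Iso-X hardware).
    page_owner = {}  # physical page number -> owning compartment id (0 = untrusted OS)

    def allocate_page(page, compartment):
        # System software may hand a page to a compartment, but once tagged,
        # ownership is enforced by the access check below.
        page_owner[page] = compartment

    def access_allowed(page, compartment):
        # Conceptual per-access check: only the owning compartment may touch the page.
        return page_owner.get(page, 0) == compartment

    allocate_page(0x42, 1)            # page 0x42 hosts a compartment's private data
    print(access_allowed(0x42, 1))    # True: the compartment itself may access it
    print(access_allowed(0x42, 0))    # False: compromised system software is denied

The point of the sketch is only the ownership check: system software can still allocate pages, but it cannot read or write pages owned by a compartment.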

With the recent release of Intel’s Software Guard Extensions (SGX), isolated execution is receiving renewed interest from the research community. The high-level principles behind Iso-X and SGX are the same, so the information presented in this talk will familiarize listeners with the concepts they need to understand isolated execution in general.

Dr. Ryan Riley is an Associate Professor of Computer Science at Qatar University. He received his Ph.D. from Purdue University in 2009 under the direction of Dongyan Xu and Xuxian Jiang. Although his first love is teaching, he also enjoys research in a variety of areas related to operating systems, computer architecture, and, most of all, security. You can find more information about him at his website, https://vsecurity.info/

Faculty Host: Guy Blelloch
Qatar

The explosion of clinical data provides an exciting new opportunity to use machine learning to discover new and impactful clinical information. Among the questions that can be addressed are establishing the value of treatments and interventions in heterogeneous patient populations, creating risk stratification for clinical endpoints, and investigating the benefit of specific practices or behaviors. However, there are many challenges to overcome. First, clinical data are noisy, sparse, and irregularly sampled. Second, many clinical endpoints (e.g., the time of disease onset) are ambiguous, resulting in ill-defined prediction targets.

I tackle these problems by learning abstractions that generalize across applications despite missing and noisy data. My work spans coded records from administrative staff, vital signs recorded by monitors, lab results from ordered tests, notes taken by clinical staff, and accelerometer signals from wearable monitors. The learned representations capture higher-level structure and dependencies between multi-modal time series data and multiple time-varying targets. I focus on learning techniques that transform diverse data modalities into a consistent intermediate representation that improves prediction in clinical investigations.

In this talk, I will present work that addresses the problem of learning good representations from clinical data. I will discuss the need for practical, evidence-based medicine, and the challenges of creating multi-modal representations for prediction targets that vary both spatially and temporally. I will present work using electronic medical records for over 30,000 intensive care patients from the MIMIC-III dataset to predict both mortality and clinical interventions. To our knowledge, classification results on these tasks are better than those of previous work. Moreover, the learned representations hold intuitive meaning: topics inferred from narrative notes, and latent autoregressive states over vital signs. I will also present work from a non-clinical setting that uses non-invasive wearable data to detect harmful vocal patterns and their pathological physiology. I present two sets of results in this area: 1) it is possible to detect pathological anatomy from the ambulatory signal, and 2) it is possible to detect the impact of therapy on vocal behaviors.
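As a rough illustration of what "topics inferred from narrative notes" can look like as a representation (a minimal sketch on placeholder data, not the speaker's actual pipeline), per-note topic proportions can feed a simple downstream mortality classifier:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression

    # `notes` and `died_in_hospital` are hypothetical placeholders for
    # de-identified note text and outcome labels.
    notes = ["pt intubated overnight, sedated, pressors weaned",
             "tolerating po intake, ambulating with assistance"]
    died_in_hospital = [1, 0]

    counts = CountVectorizer(stop_words="english").fit_transform(notes)
    # Each note becomes a mixture over latent topics; that mixture is the representation.
    topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
    clf = LogisticRegression().fit(topics, died_in_hospital)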

Marzyeh Ghassemi is a PhD student in the Clinical Decision Making Group (MEDG) at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), supervised by Dr. Peter Szolovits. Her research focuses on machine learning with clinical data to predict and stratify relevant human risks, encompassing unsupervised learning, supervised learning, and structured prediction. Marzyeh’s work has been applied to estimating the physiological state of patients during critical illnesses, modeling the need for a clinical intervention, and diagnosing phonotraumatic voice disorders from wearable sensor data.

While at MIT, Marzyeh was a joint Microsoft Research/Product intern at MSR-NE, and co-organized the NIPS 2016 Machine Learning for Healthcare (ML4HC) workshop. Her work has appeared in KDD, AAAI, IEEE TBME, MLHC, JAMIA, and AMIA-CRI. Prior to MIT, Marzyeh received B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University, worked at Intel Corporation, and received an M.Sc. in biomedical engineering from Oxford University as a Marshall Scholar.

Machine Learning

In this talk I will discuss and demonstrate how to use interactive teaching methods to support diverse sets of students in introductory computer science courses. I'll give a short sample lecture on random numbers and Monte Carlo methods which demonstrates the use of live coding and in-class exercises woven into a lecture format. I'll then briefly discuss the work I've done on supporting personalized learning at scale, and how this work might be extended in the future.
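As an illustration of the kind of example such a live-coded lecture might use (a minimal sketch, not necessarily the actual lecture material), here is a Monte Carlo estimate of pi driven by random numbers:

    import random

    def estimate_pi(num_samples):
        # Sample points uniformly in the unit square and count how many land
        # inside the quarter circle of radius 1; that fraction approximates pi/4.
        inside = sum(1 for _ in range(num_samples)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * inside / num_samples

    print(estimate_pi(1_000_000))  # converges toward 3.14159... as samples grow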

Kelly Rivers is a PhD candidate at Carnegie Mellon University in the Human-Computer Interaction Institute, where she is advised by Ken Koedinger. She specializes in teaching CS0 and CS1 courses at large scale, and works to incorporate her research into her classes. This research focuses on developing data-driven methods for generating hints and feedback for students who are learning how to code, and draws inspiration from the fields of intelligent tutoring systems, program transformations, and learning science theory. Kelly graduated from Carnegie Mellon with a B.S. in Mathematics and Computer Science in 2011 and plans to defend her thesis in the summer of 2017.

Faculty Host: David Andersen
Computer Science / Institute for Software Research

Networks are a fundamental model of complex systems in biology, neuroscience, engineering, and social science. Networks are typically described in terms of lower-order connectivity patterns captured at the level of individual nodes and edges. However, higher-order connectivity patterns captured by small subgraphs, or network motifs, describe the fundamental structures that control and mediate the behavior of many complex systems. In this talk, I will discuss several analyses based on higher-order connectivity patterns that I have developed to gain new insights into network data. Specifically, I will introduce a motif-based clustering methodology, a generalization of the classical network clustering coefficient, and a formalism for temporal motifs to study temporal networks. I will also show applications of higher-order analysis in several domains, including ecology, biology, transportation, neuroscience, social networks, and human communication.
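For reference, the classical global clustering coefficient that this work generalizes is the fraction of wedges (two-edge paths) that close into triangles. A minimal sketch of that baseline quantity (illustrative only, not the speaker's motif-based method):

    from itertools import combinations

    def global_clustering_coefficient(adj):
        # adj maps each node to the set of its neighbours (undirected graph).
        # The coefficient is the fraction of wedges centred on any node
        # whose two endpoints are themselves connected (closing a triangle).
        wedges, closed = 0, 0
        for center, nbrs in adj.items():
            for u, w in combinations(nbrs, 2):
                wedges += 1
                if w in adj[u]:
                    closed += 1
        return closed / wedges if wedges else 0.0

    # A triangle {1, 2, 3} with a pendant node 4 attached to node 3:
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(global_clustering_coefficient(adj))  # 3 closed wedges out of 5 -> 0.6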

Austin Benson is a PhD candidate at Stanford University in the Institute for Computational and Mathematical Engineering where he is advised by Professor Jure Leskovec of the Computer Science Department.  His research focuses on developing data-driven methods for understanding complex systems and behavior. Broadly, his research spans the areas of network science, applied machine learning, tensor and matrix computations, and computational social science.  Before Stanford, he completed undergraduate degrees in Computer Science and Applied Mathematics at the University of California, Berkeley.  Outside of the university, he has spent summers interning at Google (four times), Sandia National Laboratories, and HP Labs.

Machine Learning/Computer Science

Creative language—the sort found in novels, film, and comics—contains a wide range of linguistic phenomena, from phrasal and sentential syntactic complexity to high-level discourse structures such as narrative and character arcs. In this talk, I explore how we can use deep learning to understand, generate, and answer questions about creative language. I begin by presenting deep neural network models for two tasks involving creative language understanding: 1) modeling dynamic relationships between fictional characters in novels, for which our models achieve higher interpretability and accuracy than existing work; and 2) predicting dialogue and artwork from comic book panels, in which we demonstrate that even state-of-the-art deep models struggle on problems that require commonsense reasoning. Next, I introduce deep models that outperform all but the best human players on quiz bowl, a trivia game that contains many questions about creative language. Shifting to ongoing work, I describe a neural language generation method that disentangles the content of a novel (i.e., the information or story it conveys) from the style in which it is written. Finally, I conclude by integrating my work on deep learning, creative language, and question answering into a future research plan to build conversational agents that are both engaging and useful.

Mohit Iyyer is a fifth-year Ph.D. student in the Department of Computer Science at the University of Maryland, College Park, advised by Jordan Boyd-Graber and Hal Daumé III. His research interests lie at the intersection of deep learning and natural language processing. More specifically, he focuses on designing deep neural networks for both traditional NLP tasks (e.g., question answering, sentiment analysis) and new problems that involve understanding creative language. He has interned at MetaMind and Microsoft Research, and his research has won a best paper award at NAACL 2016 and a best demonstration award at NIPS 2015.

Language Technologies

As robots become integrated into human environments, they increasingly interact directly with people. This is particularly true for assistive robots, which help people through social interactions (like tutoring) or physical interactions (like preparing a meal). Developing effective human-robot interactions in these cases requires a multidisciplinary approach involving both fundamental algorithms from robotics and insights from cognitive science. My research brings together these two areas to extend the science of human-robot interaction, with a particular focus on assistive robotics. In the process of developing cognitively inspired algorithms for robot behavior, I seek to answer fundamental questions about human-robot interaction: What makes a robot appear intelligent? How can robots communicate their internal states to human partners to improve their ability to collaborate? And, conversely, how can robots "read" human behaviors that reveal people's goals, intentions, and difficulties, to identify where assistance is required?

In this talk, I describe my vision for robots that collaborate with humans on complex tasks by leveraging natural, intuitive human behaviors. I explain how models of human attention, drawn from cognitive science, can help select robot behaviors that improve human performance on a collaborative task. I detail my work on algorithms that predict people's mental states based on their eye gaze and provide assistance in response to those predictions. And I show how breaking the seamlessness of an interaction can make robots appear smarter. Throughout the talk, I will describe how techniques and knowledge from cognitive science help us develop robot algorithms that lead to more effective interactions between people and their robot partners.

Henny Admoni is a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, where she investigates human-robot interaction with Siddhartha Srinivasa in the Personal Robotics Lab. Henny develops and studies intelligent robots that improve people's lives by providing assistance through social and physical interactions. She studies how nonverbal communication, such as eye gaze and pointing, can improve assistive interactions by revealing underlying human intentions and improving human-robot communication.

Henny completed her PhD in Computer Science at Yale University with Professor Brian Scassellati. Her PhD dissertation was about modeling the complex dynamics of nonverbal behavior for socially assistive human-robot interaction. Henny holds an MS in Computer Science from Yale University, and a BA/MA joint degree in Computer Science from Wesleyan University. Henny's scholarship has been recognized with awards such as the NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Palantir Women in Technology Scholarship.

Faculty Host: Illah Nourbakhsh
Robotics

The top Sustainable Development Goals of the United Nations, including poverty alleviation, literacy, and gender equality, are closely tied to the problem of exclusion from core economic, social, and cultural infrastructures. As a potential tool for sustainable development, technology has a responsibility to make these infrastructures more inclusive. However, to date, many of the world’s biggest technological advances have primarily benefited only a small fraction of the developed world. The goal of my research is to leverage ethnographic methods to understand underserved populations in low-income regions, and to design and develop appropriate technologies that bring sustainable positive change to their lives.

In this talk, I will describe my general research approach, which combines ethnography and design. I will focus on two projects to explain how understanding communities through deep ethnography can result in effective technologies. The first is “Suhrid”, an accessible mobile phone interface for a low-literate rickshaw driver community. The second is “Protibadi”, a mobile phone application for women to combat public sexual harassment. Both projects will demonstrate a set of ethnographic tools and techniques for understanding the different economic, social, and cultural values of a community, and how those can play a crucial role in designing novel technologies. In addition, I will briefly discuss my ongoing work on privacy rights, refugee issues, technology repair, and e-waste to show how ethnographic studies have opened up novel spaces for design and other creative interactions mediated by computing technologies. Through these projects, I will also explain how “voice”, which I define as better access, visibility, and freedom, can empower marginalized communities to combat the problem of exclusion and contribute toward sustainable development.

Syed Ishtiaque Ahmed is a Ph.D. Candidate in the Department of Information Science at Cornell University, where he is advised by Prof. Steven J. Jackson. Ishtiaque earned his B.Sc. in 2009 and M.Sc. in 2011, both in Computer Science and Engineering, from Bangladesh University of Engineering and Technology (BUET). He then taught at BUET for two years as a Lecturer, where, in 2009, he founded the first Human-Computer Interaction research group in Bangladesh, which he continues to lead. In 2010, Ishtiaque started an open-source digital map-making movement in Bangladesh. In 2011, he was awarded the International Fulbright Science and Technology Fellowship to pursue his PhD at Cornell University. His PhD research lies at the intersection of Human-Computer Interaction (HCI) and Information and Communication Technologies for Development (ICTD).

He has worked with different marginalized communities in Bangladesh and India, including low-literate rickshaw drivers, victims of sexual harassment, mobile phone repairers, garment factory workers, and evicted slum dwellers. He connects ethnography and technology design to address the development-related challenges associated with those communities. In 2016, Ishtiaque co-created the first “Innovation Lab” in Bangladesh. Ishtiaque’s work has been recognized at top HCI and ICTD venues, as well as in several international news outlets, including BBC News and New Scientist. He has also received generous support for his research from the National Science Foundation (NSF), the Intel Science and Technology Center for Social Computing, the OpenStreetMap Foundation, and Microsoft Research, among others.

Human-Computer Interaction

Robots today are confined to operate in relatively simple, controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that occur when robots are placed in the real world. I will show that we can train robots to handle these variations by modeling the causes behind visual appearance changes. If robots can learn how the world changes over time, they can be robust to the types of changes that objects often undergo. I demonstrate this idea in the context of autonomous driving, and I will show how we can use this idea to improve performance for every step of the robotic perception pipeline: object segmentation, tracking, and velocity estimation. I will also present some recent work on learning to manipulate objects, using a similar framework of learning environmental changes. By learning how the environment can change over time, we can enable robots to operate in the complex, cluttered environments of our daily lives.

David Held is a post-doctoral researcher at U.C. Berkeley working with Pieter Abbeel on deep reinforcement learning for robotics. He recently completed his Ph.D. in Computer Science at Stanford University with Sebastian Thrun and Silvio Savarese, where he developed methods for perception for autonomous vehicles. David has also worked as an intern on Google’s self-driving car team. Before Stanford, David was a researcher at the Weizmann Institute, where he worked on building a robotic octopus. He received a B.S. and M.S. in Mechanical Engineering at MIT and an M.S. in Computer Science at Stanford, for which he was awarded the Best Master's Thesis Award from the Computer Science Department.

Faculty Host: Martial Hebert

Massive cancer genomics efforts have been undertaken with the hopes of personalizing cancer therapy by using targeted therapies matched to the genetics of the patient’s tumor rather than cytotoxic drugs that kill all proliferating cells.  In recent “basket” clinical trials, targeted therapies are chosen based on somatic alterations affecting specific pathway genes regardless of the cancer type, e.g. patients with activating mutations in PIK3CA are eligible for treatment with PI3K inhibitors whether they have breast cancer or head and neck cancer. Data from such clinical trials shows that the presence of an “actionable mutation” is not sufficient to predict a clinical response to the corresponding targeted therapy, and it is unclear when a targeted therapeutic with efficacy in one cancer will prove useful in another.

To better model the context-dependent role of somatic alterations, we first applied a regularized bilinear regression model to link dysregulation of upstream signaling pathways with altered transcriptional response. We fit these models using parallel (phospho)proteomic and mRNA sequencing data across The Cancer Genome Atlas (TCGA) tumor data sets. We then developed a systematic regularized regression analysis to interpret the impact of mutations and copy number events in terms of functional outcomes such as (phospho)protein and transcription factor (TF) activities. Our analysis predicted distinct dysregulated transcriptional regulators downstream of similar somatic alterations in different cancers. We validated the context-specific activity of TFs associated with mutant PIK3CA in model systems. These results have implications for the pan-cancer use of targeted drugs and potentially for the design of combination therapies.
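One common way to write such a regularized bilinear regression (the notation below is illustrative, not necessarily the authors' exact formulation) is to learn an interaction matrix W linking upstream (phospho)protein levels to downstream transcriptional output:

\[
\hat{W} \;=\; \arg\min_{W}\; \bigl\lVert\, Y - D\,W\,P^{\top} \,\bigr\rVert_F^{2} \;+\; \lambda\,\lVert W \rVert_F^{2}
\]

Here Y (genes by samples) holds the expression response, D (genes by regulatory features) encodes, for example, transcription-factor binding sites in gene promoters, P (samples by proteins) holds the (phospho)protein measurements, and the lambda term is a ridge penalty that regularizes the learned interaction weights W.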

Dr. Hatice Ulku Osmanbeyoglu is a postdoctoral research associate at Memorial Sloan-Kettering Cancer Center. Her research focuses on developing data-driven computational approaches to understand disease mechanisms in order to assist in the development of personalized anticancer treatments. She obtained her Ph.D. in Biomedical Informatics from the University of Pittsburgh, and holds a Master of Science in Electrical and Computer Engineering from Carnegie Mellon University and a Master of Science in Bioengineering from the University of Pittsburgh. She completed her Bachelor of Science in Computer Engineering at Northeastern University (summa cum laude). She has received multiple awards, including the NIH Pathway to Independence Award (K99/R00) and the Memorial Sloan-Kettering Postdoctoral Research Award.

Faculty Host: Jian Ma
Computational Biology

An artist sculpting a block of marble, a magician pulling a card from thin air, and a surgeon performing a difficult emergency procedure all highlight the brilliant human ability to manipulate the physical world. And yet, even as these high-skill tasks push the limits of human capability, they remind us of the boundary of human dexterity. In this talk, I will present my past, present, and planned research to allow humans of all skill and ability levels, as well as their robotic counterparts, to accomplish previously impossible tasks.

I will begin with an overview of my research aimed at better enabling humans to teleoperate robots under direct control. The ideal direct-control teleoperation system would enable the operator to complete a given task at least as easily as if he or she were to complete it directly with his or her own hands. My research improves the usability of teleoperation systems through a human-centered design approach. Specifically, I leverage prior knowledge of the human motor and sensory systems to increase the transparency of, and the presence provided by, teleoperation systems. I will then describe my ongoing research, which investigates the use of shared control and shared autonomy in teleoperation. Finally, I will end with my future plans to expand my research to other areas of collaborative and assistive robotics.

Rebecca Pierce Khurshid is a postdoctoral associate in the Interactive Robotics Group at MIT, where she works to enable human-robot teams to achieve more than either humans or robots can achieve alone. Specifically, she is investigating how humans can best teleoperate robots and how varying levels of robot autonomy affect the team’s performance. She arrived at MIT after completing her PhD and master’s degrees in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. Her doctoral research leveraged previous scientific knowledge of the human sensory-motor system to design interfaces that allowed a human to teleoperate a humanoid robot. She received an NSF Graduate Research Fellowship to support her work. Prior to Penn, she earned her bachelor’s degree in Mechanical Engineering at Johns Hopkins.

Faculty Host: David Wettergreen
Robotics
