Robots are expected to become ubiquitous in the near future, working alongside and with people in everyday environments to provide various societal benefits. In contrast to this broad-ranging social vision for robotics applications, evaluations of robots and studies of human-robot interaction have largely focused on more constrained contexts, typically dyadic and small-group interactions in laboratories. As a result, we have a limited understanding of how robots are perceived, adopted, and supported in open-ended, natural social circumstances in which researchers have little control over the ensuing interactions.

This talk will discuss insights from a series of studies of the design and use of socially assistive robots (SARs) for eldercare aimed at expanding our awareness of the broader cultural, organizational, and societal dynamics that affect the use and consequences of robots outside the laboratory. Our in-home interviews with older adults suggested that existing robot designs reproduce unwanted stereotypes of aging, while naturalistic observation of robot use in a nursing home shows that ongoing labor by various groups of users is needed to produce successful voluntary human-robot interactions. In response to these findings, we are currently engaging in participatory design of robots with older adults and clinicians to provide an opportunity for mutual learning, inspire both sides to think beyond common stereotypes of older adults and robots, and identify non-technical issues of particular concern to clinicians and older adults that may affect long-term robot adoption. These concerns include the fit of robots to the home environments and values of older adults, to the labor practices and clinical needs of care staff, and to the broader healthcare infrastructure (e.g., insurance mechanisms).

In conclusion, I will discuss ways to address broader organizational and societal issues in the course of robot design and development, working together with potential users and other stakeholders to avoid unwanted consequences and create robust social supports that can cope with the inevitable challenges that emerge when we apply robots in society.

Selma Šabanović is an Associate Professor of Informatics and Cognitive Science at Indiana University, Bloomington, where she founded and directs the R-House Human-Robot Interaction Lab. Her work combines the social studies of computing, focusing particularly on the design, use, and consequences of socially interactive and assistive robots in different social and cultural contexts, with research on human-robot interaction (HRI) and social robot design. She spent Summer 2014 as a Visiting Professor at Bielefeld University's Cluster of Excellence in Cognitive Interaction Technology (CITEC). Prior to coming to IUB, she was a lecturer in Stanford University's Program in Science, Technology and Society in 2008/2009, and a visiting scholar at the Intelligent Systems Institute at AIST, Tsukuba, Japan and the Robotics Institute at Carnegie Mellon University in 2005. She was awarded IU's Outstanding Junior Faculty Award in 2013, and the Trustee's Teaching Award in 2016. She received her PhD in Science and Technology Studies from Rensselaer Polytechnic Institute in 2007.

Deep learning methods have provided us with remarkably powerful, flexible, and robust solutions in a wide range of passive perception areas: computer vision, speech recognition, and natural language processing. However, active decision-making domains such as robotic control present a number of additional challenges: standard supervised learning methods do not extend readily to robotic decision making, where supervision is difficult to obtain. In this talk, I will discuss experimental results that hint at the potential of deep learning to transform robotic decision making and control, present a number of algorithms and models that can allow us to combine expressive, high-capacity deep models with reinforcement learning and optimal control, and describe some of our recent work on scaling up robotic learning through collective learning with multiple robots.

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Faculty Host: Sidd Srinivasa

The nervous system is arguably the most sophisticated control system in the known universe, riding at the helm of an equally sophisticated plant. Understanding how the nervous system encodes and processes sensory information, and then computes motor action, therefore involves understanding a closed loop. However, it is often necessary to "isolate" all or part of the nervous system to study it. But there is no guarantee that the brain is "open-loop stable," and in fact there are clear cases in which it is likely unstable. Here we discuss two problems in which we first close a feedback loop around the brain, and then take steps to perform system identification of the stabilized brain in order to understand its computations.
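
To make the identification step concrete (this is the standard textbook formulation, not necessarily the exact model used in these studies): if the open-loop dynamics of the brain, or of the relevant subcircuit, are modeled as a transfer function $B(s)$ and the experimenter closes the loop with a known stabilizing element $C(s)$, the measured closed-loop response is

$$T(s) = \frac{B(s)\,C(s)}{1 + B(s)\,C(s)},$$

from which the open-loop dynamics can be recovered as $B(s) = T(s)\,/\,\big(C(s)\,[1 - T(s)]\big)$, even when $B(s)$ itself is unstable.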

In 2003, Noah Cowan joined Johns Hopkins University, where he is now an associate professor of mechanical engineering. He directs the Locomotion in Mechanical and Biological Systems (LIMBS) Laboratory. LIMBS Lab conducts experiments and computational analyses on both biological and robotic systems, with a focus on applying concepts from dynamical systems and control theory to garner new insights into the principles that underlie neural computation. Dr. Cowan’s research program was recognized by a Presidential Early Career Award in Science and Engineering (PECASE) in 2010 and a James S. McDonnell Complex Systems Scholar award in 2012, and his teaching and mentorship were recognized by the William H. Huggins Excellence in Teaching Award in 2005 and the Dunn Family Award in 2014.

Faculty Host: Howie Choset

Special Start Time

Over the past decade, DJI has developed several world-leading drone products, turning cutting-edge technologies such as high-resolution image transmission, visual odometry, and learning-based object tracking into affordable commercial products. Along with all these technological successes, DJI is exploring innovative ways to make them more accessible. In this talk, Shuo will review some key technologies DJI has developed, then talk about RoboMasters, a robotics competition that uses these technologies to nurture next-generation engineers.

Shuo Yang is Director of Intelligent Navigation Technologies and Director of RoboMasters Program at DJI. He obtained B.Eng and M.Phil degrees from Hong Kong University of Science and Technology (HKUST). He is involved in developing flight control and navigation technologies for several DJI flagship products, such as the Inspire 1, Phantom 4 and Matrice 100 drones and the A3 flight controller. He has coauthored 4 academic papers and obtained over 10 US patents.

Faculty Host: Sanjiv Singh

As the target scale of robot operations grows, so too does the challenge of developing software for such systems. It may be difficult, unsafe, or expensive to develop and test software under a sufficiently wide range of real-world conditions. Similarly, as the target applications of learning algorithms grow, so too do the challenges of gathering adequate training data: it may be difficult to collect large datasets, label them, or deal with differing domains. Simulation has attracted attention as a solution to these problems. To be useful, simulators must have sufficient fidelity and flexibility. For the problem of off-road Lidar scene simulation, existing solutions are either high-fidelity or flexible, but not both. Our work builds a Lidar simulator that is both.

Off-road Lidar simulation is challenging because of Lidar interaction with natural terrain such as vegetation. A hybrid geometric terrain representation, consisting of permeable ellipsoids and surface meshes, has been shown to model Lidar observations well. We propose to add semantic information to the hybrid geometric model, using standard procedures for point cloud segmentation and classification. This allows us to extract terrain primitives, such as trees and shrubs, from data logs. The primitives can then be used to compose novel scenes in which to simulate sensor observations. The advantage over arbitrary mesh models of terrain is that the primitives are associated with sensor-realistic models obtained from real data.
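
To illustrate the primitive-extraction step, here is a minimal sketch under our own assumptions (the input format and the centroid-plus-covariance ellipsoid parameterization are illustrative, not the thesis pipeline): a segmented point cluster is summarized as an ellipsoid primitive by its centroid and second moments.

```python
# Sketch: summarize one segmented point cluster (e.g., points classified
# as "shrub") as an ellipsoid primitive defined by centroid + covariance.
# Pure-Python illustration; a real pipeline would also attach a
# permeability statistic estimated from Lidar hit/pass-through counts.

def ellipsoid_primitive(points):
    """points: list of (x, y, z) tuples from one segmented cluster."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centroid = (cx, cy, cz)
    # 3x3 covariance of the cluster; its eigenvectors and eigenvalues
    # give the ellipsoid's axes and extents.
    cov = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = (p[0] - cx, p[1] - cy, p[2] - cz)
        for i in range(3):
            for j in range(3):
                cov[i][j] += d[i] * d[j] / n
    return centroid, cov

# Example: a small synthetic cluster
cluster = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
           (0.0, 1.0, 0.0), (1.0, 1.0, 2.0)]
center, cov = ellipsoid_primitive(cluster)
```

The covariance's eigendecomposition yields the ellipsoid's principal axes, and a library of such primitives, extracted from data logs, can be recombined into unseen scenes.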

A major use of simulators is to develop algorithms. In addition to measuring simulator fidelity at the level of observations, we present an algorithm-dependent risk measure. We formalize the notion that a good simulator must provide a developer with useful feedback even when the algorithm has poor performance, just as real data would. We propose to apply the idea to develop a Lidar scan matching algorithm. In addition, we propose to use the simulator to train a CNN for off-road object recognition. Our handle on all aspects of fidelity will allow us to compare the utility of different simulators for developing algorithms.

Our approach is guided by past work on indoor Lidar simulation and nonparametric sensor modeling. Our datasets for training and testing come from off-road sites of real-world interest. We expect our work to impact software development for off-road mobile robots, and to add to the understanding of simulation in general.

Thesis Committee:
Alonzo Kelly (Chair)
Martial Hebert
Michael Kaess
Peter Corke (Queensland University of Technology)

In recent years, the U.S. educational system has fallen short in training the technology innovators of the future. To train these innovators, we must give students the experience of designing and creating technological artifacts, rather than relegating them to the role of technology consumers, and must provide educators with opportunities and professional development for identifying and supporting their students' talents. This is especially important for the identification of student talents in computational thinking or engineering design, where schools commonly lack educators well versed in those domains. Educational robotics systems are one possible method for providing educators and students with these opportunities.

Our creative robotics program, Arts & Bots, combines craft materials with robotic construction and programming tasks in a manner that encourages complexity, such that a wide variety of student talents can surface while permitting integration with non-technical disciplines. This thesis describes our process in developing Arts & Bots as a tool for talent-based learning, which we define as leveraging understanding of a student's talent areas to encourage and motivate learning. We look at this process and the outcomes of two multi-year Arts & Bots studies: the three-year Arts & Bots Pioneers study, where we integrated Arts & Bots into non-technical classes; and the four-year Arts & Bots Math-Science Partnership, where we further refined Arts & Bots as a tool for talent identification.
This thesis outlines our development of a teacher training model and case studies of two teacher-designed Arts & Bots classroom projects. We present a taxonomy for novice-built robots, along with other tools that support the identification of engineering design and computational thinking talent by non-technical teachers. Finally, we describe our development of a suite of evaluation tools for assessing the outcomes of the Arts & Bots program, along with our findings from that evaluation.

Thesis Committee:
Illah Nourbakhsh (Chair)
Jack Mostow
Aaron Steinfeld
Mitchel Resnick (MIT Media Lab)

Understanding the temporal dimension of images is a fundamental part of computer vision. Humans are able to interpret how the entities in an image will change over time. However, only relatively recently have researchers focused on visual forecasting: getting machines to anticipate events in the visual world before they actually happen. This aspect of vision has many practical implications in tasks ranging from human-computer interaction to anomaly detection. In addition, temporal prediction can serve as a task for representation learning, useful for various other recognition problems.

In this thesis, we focus on visual forecasting that is data-driven, self-supervised, and relies on little to no explicit semantic information. Towards this goal, we explore prediction at different timeframes. We first consider predicting instantaneous pixel motion: optical flow. We apply convolutional neural networks to predict optical flow in static images. We then extend this idea to a longer timeframe, generalizing to pixel trajectory prediction in space-time. We incorporate models such as Variational Autoencoders to generate future possible motions in the scene. After this, we consider a mid-level element approach to forecasting. By combining a Markovian reasoning framework with an intermediate representation, we are able to forecast events over longer timescales.
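
The generative component can be summarized by the standard conditional-VAE objective (the generic evidence lower bound, not this thesis's exact loss): for an observed image $x$ and future motion $y$, a latent variable $z$ captures the multimodality of possible futures, and training maximizes

$$\log p(y \mid x) \;\ge\; \mathbb{E}_{q(z \mid x, y)}\big[\log p(y \mid x, z)\big] \;-\; \mathrm{KL}\big(q(z \mid x, y)\,\|\,p(z \mid x)\big).$$

At test time, sampling different values of $z$ from the prior produces different plausible future motions for the same image.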

In proposed work, we aim to create a model of visual forecasting that utilizes a structured representation of an image for reasoning. Specifically, instead of directly predicting events in a low-level feature space such as pixels or motion, we forecast events in a higher level representation that is still visually meaningful. This approach confers a number of advantages. It is not restricted by explicit timescales like motion-based approaches, and unlike direct pixel-based approaches predictions are less likely to "fall off" the manifold of the true visual world.

Thesis Committee:
Martial Hebert (Co-chair)
Abhinav Gupta (Co-chair)
Ruslan Salakhutdinov
David Forsyth (University of Illinois at Urbana-Champaign)

We expect legged robots to be highly mobile. Human walking and running can execute quick changes in speed and direction, even on non-flat ground. Indeed, analysis of simplified models shows that these quantities can be tightly controlled by adjusting the leg placement between steps, and that leg placement can also compensate for disturbances including changes in the ground height. However, to date, legged robots do not exhibit this level of agility or robustness, nor is it well understood what prevents them from attaining this performance. This thesis begins to bridge the gap between the theoretical motions of simplified models and the implementation of agile behaviors on legged robots.

The state of the art allows room for improvement at the level of the simplified model, at the level of hardware demonstration, and at the level of theoretical understanding of applying the simplified model to a real system. We make progress on each of these facets of the problem as we work towards leveraging theory from the simplified model to generate effective control for locomotion on robots.

In particular, spring mass theory has identified deadbeat stability for planar running, but it must be formulated in 3D to be applicable to a real system. We extend this behavior to 3D, adding deadbeat steering to the tracking of apex height on unobserved terrain. Running robots have yet to demonstrate the agile and robust behavior that the spring mass model describes; existing implementations do not target the deadbeat behavior. We apply state-of-the-art control techniques to map the deadbeat-stabilized planar running onto our robot ATRIAS, and we successfully demonstrate tight tracking of commanded velocities and robustness to unobserved changes in ground height.

Despite this empirical proof of concept, it remains unclear how exactly the targeted behavior of the simplified model affects the closed-loop behavior of the full-order system. There are additional degrees of freedom which affect the tracking of original goals, and additional layers of control which may offer other sources of stability. Furthermore, the hardware introduces perturbations and uncertainties which detract from the nominal performance of the full-order model. To answer these questions, we formulate a framework founded on linear theory, and we use it to examine the contributions of each component of the control and to quantify the expected effects of the disturbances we encounter. This analysis reveals insights for effective control strategies for legged locomotion and presents a tool for scientific iteration between theory-based control design and evidence-based revision of the underlying theory.
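
For reference, the planar spring mass (SLIP) stance dynamics underlying this deadbeat theory are, in their textbook form (point mass $m$, leg stiffness $k$, rest length $r_0$, leg length $r$, leg angle $\theta$ measured from vertical):

$$m\,(\ddot{r} - r\dot{\theta}^2) = k\,(r_0 - r) - m g \cos\theta, \qquad m\,(r\ddot{\theta} + 2\dot{r}\dot{\theta}) = m g \sin\theta.$$

During flight the mass follows a ballistic trajectory, and a deadbeat policy chooses the touchdown leg angle (in 3D, two angles) as a function of the apex state.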

Thesis Committee:
Hartmut Geyer (Chair)
Christopher G. Atkeson
Koushil Sreenath
Jerry Pratt (Institute for Human and Machine Cognition)

Autonomous quadrotors will soon play a major role in search-and-rescue and remote-inspection missions, where a fast response is crucial. Quadrotors have the potential to navigate quickly through unstructured environments, enter and exit buildings through narrow gaps, and fly through collapsed buildings. However, their speed and maneuverability are still far from those of birds. Indeed, agile navigation through unknown, indoor environments poses a number of challenges for robotics research in terms of perception, state estimation, planning, and control. In this talk, I will give an overview of my research activities on visual navigation of quadrotors, from slow navigation (using standard frame-based cameras) to agile flight (using active vision and event-based cameras). Topics covered will include visual-inertial state estimation, monocular dense reconstruction, active vision and control, and event-based vision.

Davide Scaramuzza (born in 1980, Italian) is Assistant Professor of Robotics at the University of Zurich, where he does research at the intersection of robotics and computer vision. He did his PhD in robotics and computer vision at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis). From 2009 to 2012, he led the European project "sFly", which introduced the world's first autonomous navigation of micro drones using visual-inertial sensors and onboard computing. For his research contributions, he was awarded the IEEE Robotics and Automation Society Early Career Award, the SNSF-ERC Starting Grant ($1.5m, equivalent of the NSF CAREER Award), and a Google Faculty Research Award.

In 2015, his lab received funding from the DARPA FLA Program, a three-year project dedicated to agile navigation of vision-controlled drones in unstructured and cluttered environments. He coauthored the book "Introduction to Autonomous Mobile Robots" (published by MIT Press) and more than 80 papers on robotics and perception. In 2015, he co-founded a venture called Zurich-Eye, dedicated to the commercialization of visual-inertial navigation solutions for mobile robots. In September 2016, this became Facebook-Oculus VR Switzerland.

Faculty Host: Michael Kaess

