 SSS Abstracts 
Fall 2013

A Reliability Analysis Technique for Estimating Sequentially Coordinated Multirobot Mission Performance

Tuesday, November 19th, 2013 from 12-1 pm in GHC 6501.

Presented by John F. Porter, RI

We present a quantifiable method for capturing and generalizing robot behaviors, as determined by their performance in a cyber-physical context, so that accurate predictions of sequentially coordinated multirobot behaviors can be made. The analysis technique abstracts sequentially coordinated multirobot missions as a frequentist inference problem. Rather than attempting to identify, and place in a causal relation, all of the hidden and unknown cyber-physical influences that can affect mission performance, we model the problem as one of predicting multirobot performance as a conditional probability. This allows us to initially limit the testing and evaluation of robot performance to evaluations of atomistic behaviors, and to experiment mathematically with combinations of predictive features and elementary performance metrics to derive accurate predictions of higher-level coordinated performance.
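
The core idea can be illustrated with a small sketch (not the authors' actual model): estimate the success probability of a sequentially coordinated mission by chaining frequentist point estimates of each atomic behavior's success rate, under an assumed conditional independence between steps. The behavior names and test counts below are invented for illustration.

```python
# Illustrative sketch: mission success as a conditional probability chained
# over atomic behaviors. Assumes each step's success rate is estimated
# independently from cyber-physical test data (hypothetical counts here).

def behavior_success_rate(successes, trials):
    """Frequentist point estimate of one atomic behavior's success rate."""
    return successes / trials

def mission_success_estimate(behavior_stats):
    """P(mission) = product over steps of P(step_i | earlier steps succeeded),
    assuming conditional independence given prior successes."""
    p = 1.0
    for successes, trials in behavior_stats:
        p *= behavior_success_rate(successes, trials)
    return p

# Hypothetical test data: (successes, trials) for navigate, grasp, handoff.
stats = [(48, 50), (45, 50), (40, 50)]
print(round(mission_success_estimate(stats), 3))  # prints 0.691
```

In practice the conditional terms need not be independent; the abstract's point is precisely that the combinations of predictive features and elementary metrics can be explored mathematically before committing to full-mission testing.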

Collaborators: Joseph Giampapa, John Dolan, Kawa Cheung.

Structured Models for Videos

Tuesday, December 3rd, 2013 from 12-1 pm in GHC 6501.

Presented by Ekaterina Taralova, CSD

Bag of Words (BoW) is a popular and successful framework for the task of activity classification in videos. In BoW we extract features, cluster them to learn a codebook of words, and then quantize each video by pooling the features. We address limitations of two fundamental aspects of this framework. First, we add structure to the clustering step to enable generalization across different execution styles. Second, we provide a method for pooling features in a structured way. In prior work, this pooling is done over pre-determined rigid cuboids. It is natural to consider pooling features over a video segmentation, but this produces a video representation of variable size. We propose a fixed-size representation, Motion Words, where we pool features over supervoxels. To segment the video into supervoxels we propose a superpixel-based method, Globally Consistent Supervoxels, designed to preserve motion boundaries over the entire video. Evaluation on classification and retrieval tasks on two datasets shows that Motion Words achieves state-of-the-art performance. In addition to providing more flexible support for capturing actions, the proposed method enables interpretation of the results, i.e., it helps explain why two videos are similar.
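
For readers unfamiliar with the baseline, here is a minimal sketch of the standard BoW pipeline the abstract builds on (not the proposed Motion Words method): quantize local features against a learned codebook and pool them into a fixed-size histogram per video. Feature dimensions, codebook size, and the random data are arbitrary choices for illustration.

```python
# Minimal Bag-of-Words sketch: assign each local descriptor to its nearest
# codebook word, then pool assignments into a normalized histogram.
import numpy as np

def quantize(features, codebook):
    """Assign each feature vector to its nearest codebook word (L2 distance)."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def bow_histogram(features, codebook):
    """Pool quantized features into a normalized word histogram."""
    words = quantize(features, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))    # 8 words, 16-D features (illustrative)
features = rng.normal(size=(100, 16))  # 100 local descriptors from one video
h = bow_histogram(features, codebook)
print(h.shape, round(h.sum(), 3))      # prints (8,) 1.0
```

The limitation the abstract targets is visible here: the pooling region is the whole clip (or, in prior work, rigid cuboids), whereas Motion Words pools over supervoxels while keeping the representation fixed-size.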

This is joint work with Fernando De la Torre and Martial Hebert.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Safely-Composable Type-Specific Languages

Friday, December 6th, 2013 from 12-1 pm in GHC 4303.

Presented by Cyrus Omar, CSD

Domain-specific languages can improve ease-of-use, expressiveness and verifiability, but defining and using different DSLs within a single application remains difficult.

We introduce an approach for embedding DSLs in a common host language where the type of a piece of domain-specific code can specify which grammar governs it. Because this grammar is type-specific, but the block is delimited by the host language, we can guarantee that link-time conflicts cannot arise. These grammars can recursively include top-level expressions using special entry tokens that guarantee that the composition of the type-specific language and the host language is also sound. We argue that this approach occupies a previously unexplored sweet spot, providing high expressiveness and ease-of-use while guaranteeing safety. We introduce the design, provide examples, sketch the safety theorems, and describe an ongoing implementation of this strategy in the Wyvern programming language.
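
As a toy illustration of the key idea (this is not Wyvern's actual mechanism): the expected type of a delimited literal determines which parser interprets its body, while the host language fixes only the delimiters. The type names, parsers, and literal syntax below are invented for the sketch.

```python
# Toy "type-specific language" dispatch: the host language sees only a
# delimited literal body plus an expected type; the type selects the grammar.
import re

TSL_PARSERS = {}  # type name -> parse function for that type's literal syntax

def tsl(type_name):
    """Register a parse function as the type-specific language for a type."""
    def register(fn):
        TSL_PARSERS[type_name] = fn
        return fn
    return register

@tsl("URL")
def parse_url(body):
    # Hypothetical URL literal grammar: scheme://rest
    scheme, _, rest = body.partition("://")
    return {"scheme": scheme, "rest": rest}

@tsl("Regex")
def parse_regex(body):
    # The Regex type's literal grammar is ordinary regex syntax.
    return re.compile(body)

def parse_literal(expected_type, body):
    """Host-language hook: dispatch the literal body by its expected type."""
    return TSL_PARSERS[expected_type](body)

url = parse_literal("URL", "https://example.com/index")
print(url["scheme"])  # prints https
```

Because each type owns exactly one grammar and the host fixes the delimiters, two independently developed TSLs cannot conflict at link time, which is the safety property the abstract emphasizes.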

This is joint work with Jonathan Aldrich, Darya Kurilova, Benjamin Chung, Ligia Nistor and Alex Potanin.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Detecting associations between genetic variants and output traits using prior biological knowledge

Tuesday, December 10th, 2013 from 12-1 pm in GHC 6501.

Presented by Seunghak Lee, CSD

One of the fundamental problems in computational biology is to detect genetic variants associated with output traits such as disease status, height, or gene expression. However, detecting trait-associated genetic variants is challenging because, in practice, we lack the statistical power to detect them reliably: we usually have a small number of samples relative to a large number of genetic variants.

In this work, we present a novel method that uses prior biological knowledge to boost the statistical power of detecting genetic variants associated with traits. Specifically, we use biological knowledge about groups of correlated genetic variants (e.g. genetic variants in linkage disequilibrium) and groups of correlated traits (e.g. co-expressed genes). Given the grouping information, we assume that a group of correlated traits may be affected by common genetic variants, or that a group of correlated genetic variants may affect common traits. Under such assumptions, we incorporate this biological knowledge into a sparse regression model using L1/L2 penalties. We illustrate our approach with examples, and show how prior biological knowledge helps increase the power to detect associations between genetic variants and traits.
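
The L1/L2 (group lasso) penalty mentioned above can be illustrated with a short sketch (not the authors' implementation): its proximal operator shrinks each predefined group of regression coefficients toward zero jointly, so that correlated genetic variants enter or leave the model together. The groups and coefficient values here are invented.

```python
# Illustrative group-lasso proximal step: block-wise soft thresholding.
# Each group of coefficients (e.g. variants in one LD block) is scaled down
# by its joint L2 norm and zeroed out together if the norm is small.
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Prox of lam * sum_g ||beta_g||_2 applied to coefficient vector beta."""
    out = np.zeros_like(beta, dtype=float)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * beta[g]
    return out

beta = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]  # hypothetical: two LD blocks of variants
print(group_soft_threshold(beta, groups, lam=1.0))
```

The first group survives (its joint norm exceeds the penalty) while the second is zeroed as a unit, which is exactly how the grouping prior concentrates statistical power on plausible variant sets.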

This is joint work with Eric Xing.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Web contact: sss+www@cs