
SSS Abstracts
Spring 2013


Testing in the World of Plug-In Based Systems

Friday, March 1st, 2013 from 12-1 pm in GHC 4303.

Presented by Shivanshu Singh, ISR

Plug-in-based systems are mainstream these days and include some of the most popular software in everyday use: Eclipse, Android, WordPress, and Firefox are a few examples. With so many plug-ins, numerous vendors, and a wide range of plug-in versions and combinations, it is easy to imagine the problems this can cause: testing the various combinations and checking the dependencies among plug-ins and their versions, to list a few. In this presentation, we look at the concerns that arise when testing plug-in-based systems, especially in the context of the Eclipse ecosystem, and discuss possible directions for work to ease some of these problems.


C-to-CoRAM: Compiling Perfect Loop Nests to the Portable CoRAM

Tuesday, April 9th, 2013 from 12-1 pm in GHC 6501.

Presented by Gabriel Weisz, CSD

Modern FPGAs contain abundant resources that suggest their potential as first-class computation devices. However, they are rarely used for computation because of the difficulty involved in using them. I will present a system that compiles perfect loop nests, which are common in scientific computing applications, to optimized FPGA implementations. I'll discuss how we create parallel designs from sequential code, optimize memory accesses to reduce DRAM bandwidth requirements, and produce compiled designs for either Altera or Xilinx boards.

Joint work with James Hoe. Presented in partial fulfillment of the speaking requirement.
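To illustrate the "perfect loop nest" pattern the compiler targets (sketched here in Python for readability; the actual toolchain consumes C source): all computation sits in the innermost loop body, with no statements between the loop headers. Matrix multiplication is the canonical example.

```python
# A "perfect" loop nest: every statement lives in the innermost body,
# and no code appears between the nested loop headers. This regular
# structure is what makes such kernels amenable to automatic
# parallelization and memory-access optimization.

def matmul(A, B, n):
    out = [[0] * n for _ in range(n)]
    for i in range(n):            # no statements here...
        for j in range(n):        # ...or here...
            for k in range(n):    # ...all work is in the innermost body
                out[i][j] += A[i][k] * B[k][j]
    return out

# Multiplying by the identity matrix returns the other operand.
prod = matmul([[1, 0], [0, 1]], [[2, 3], [4, 5]], 2)
```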


"Why is DRAM so slow?"

Tuesday, April 16th, 2013 from 12-1 pm in GHC 6501.

Presented by Vivek Seshadri, CSD

Most modern systems employ main memory based on DRAM technology because DRAM presents a favorable trade-off between cost-per-bit and performance. However, the access latency of commodity DRAM continues to be a bottleneck for system performance. The aim of this talk is to enable the audience to appreciate the problems faced by existing DRAM designs and the solutions proposed by our research. The talk is divided into three parts. In the first part, I will describe the internal organization of DRAM-based memory in detail. Based on this understanding, in the second part, I will present some problems faced by today's DRAM designs, including the high access latency of commodity DRAM. Finally, in the third part, I will briefly summarize the research our group has been carrying out to address these problems.

Presented in Partial Fulfillment of the Speaking Requirement
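One reason DRAM access latency varies, and a flavor of the internal organization the talk covers, is the per-bank row buffer: accessing the row already latched in the buffer is fast, while a different row forces the bank to close the old row and activate the new one. A toy model (the cycle counts below are hypothetical, not figures from the talk):

```python
# Toy model of DRAM row-buffer behavior: a "hit" (same row as the
# last access to this bank) is cheap; a "miss" pays for precharging
# the old row and activating the new one. Latency numbers are
# illustrative placeholders, not real timing parameters.

ROW_HIT_CYCLES = 15    # hypothetical column access into the open row
ROW_MISS_CYCLES = 45   # hypothetical precharge + activate + access

class DramBank:
    def __init__(self):
        self.open_row = None  # row currently held in the row buffer

    def access(self, row):
        if row == self.open_row:
            return ROW_HIT_CYCLES
        self.open_row = row   # activate the requested row
        return ROW_MISS_CYCLES

bank = DramBank()
latencies = [bank.access(r) for r in (7, 7, 3, 3, 7)]
# first access to each new row misses; immediate repeats hit
```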


Linear Logical Voting Protocols

Tuesday, April 23rd, 2013 from 12-1 pm in GHC 6501.

Presented by Henry DeYoung, CSD

Current approaches to electronic implementations of voting protocols involve translating legal text to source code of an imperative programming language. Because the gap between legal text and source code is very large, it is difficult to trust that the program meets its legal specification. In response, we promote linear logic as a high-level language for both specifying and implementing voting protocols. Our linear logical specifications of the single-winner first-past-the-post (also known as winner-take-all) and single transferable vote (STV) protocols demonstrate that this approach leads to concise implementations that closely correspond to their legal specification, thereby increasing trust.

Joint work with Carsten Schuermann (IT University of Copenhagen).

In partial fulfillment of the CSD speaking requirement.
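For readers unfamiliar with the protocols named above, here is the counting procedure of single-winner STV (instant runoff) sketched imperatively in Python — precisely the style of implementation the talk argues against in favor of linear logic; ballots and candidate names are hypothetical.

```python
# Single-winner STV (instant runoff), sketched imperatively: each
# ballot ranks candidates; repeatedly tally first choices among the
# surviving candidates, eliminate the weakest, and transfer that
# candidate's ballots to their next surviving choice, until one
# candidate holds a strict majority.

from collections import Counter

def stv_winner(ballots):
    surviving = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(
            next(c for c in ballot if c in surviving)
            for ballot in ballots
            if any(c in surviving for c in ballot))
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):   # strict majority reached
            return top
        surviving.remove(min(tally, key=tally.get))
```

For example, with ballots [["A","B"], ["A","B"], ["B","A"], ["C","B"], ["C","B"]], no one has a majority in round one (A:2, B:1, C:2), so B is eliminated and B's ballot transfers to A, who then wins 3 to 2.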


Developing a Predictive Model of Quality of Experience for Internet Video

Friday, April 26th, 2013 from 12-1 pm in GHC 4303.

Presented by Athula Balachandran, CSD

Improving users' quality of experience (QoE) is crucial to sustain the advertisement- and subscription-based revenue models that enable the growth of Internet video. However, traditional techniques to measure video quality (e.g., Peak Signal-to-Noise Ratio) and user experience (e.g., opinion scores) have significant limitations, and they are being replaced by new video quality metrics (e.g., rate of buffering, average bitrate) and engagement-centric measures of user experience (e.g., viewing time, customer return rate). In this talk, I will present the challenges in developing an engagement-centric QoE model for Internet video and will discuss a systematic data-driven approach to tackle these challenges. I will also briefly illustrate how different players in the Internet video ecosystem (content providers, content delivery networks, etc.) can use this video QoE model to improve overall user engagement and hence their revenue.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
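To make the data-driven idea concrete in miniature: fit a simple model predicting an engagement measure (viewing time) from one quality metric (buffering ratio). The data points below are synthetic stand-ins; the model in the talk handles many metrics and confounding factors, not a single linear fit.

```python
# Minimal sketch of a data-driven QoE model: ordinary least-squares
# fit of viewing time against buffering ratio. Data is synthetic and
# purely illustrative.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

buffering_ratio = [0.00, 0.02, 0.05, 0.10]   # fraction of time spent buffering
viewing_minutes = [40.0, 32.0, 20.0, 0.0]    # synthetic engagement values
slope, intercept = fit_line(buffering_ratio, viewing_minutes)
# a negative slope captures "more buffering predicts less viewing time"
```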


GraphChi: Large-Scale Graph Computation on Just a PC

Tuesday, April 30th, 2013 from 12-1 pm in GHC 6501.

Presented by Aapo Kyrola, CSD

Current systems for graph computation require a distributed computing cluster to handle very large real-world problems, such as analysis on social networks or the web graph. While distributed computation resources have become more accessible, developing distributed graph algorithms still remains challenging, especially to non-experts.

In this work, we present GraphChi, a disk-based system for computing efficiently on graphs with billions of edges. By using a well-known method to break large graphs into small parts, and a novel parallel sliding windows method, GraphChi is able to execute several advanced data mining, graph mining, and machine learning algorithms on very large graphs, using just a single consumer-level computer. We further extend GraphChi to support graphs that evolve over time, and demonstrate that on a single computer, GraphChi can process over a hundred thousand graph updates per second, while simultaneously performing computation. We show, by experiments and theoretical analysis, that GraphChi performs well on SSDs and, surprisingly, also on rotational hard drives.

By repeating experiments reported for existing distributed systems, we show that, with only a fraction of the resources, GraphChi can solve the same problems in very reasonable time. This work was presented at OSDI '12.

Presented in Partial Fulfillment of the Speaking Requirement
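A highly simplified sketch of the vertex-centric, sequential-pass style of computation described above: a PageRank-like algorithm expressed as repeated sweeps over an edge list that could be streamed from disk shard by shard. The real parallel sliding windows machinery is omitted.

```python
# Simplified vertex-centric computation in the spirit of GraphChi:
# each iteration makes one sequential pass over the edges (which in
# the real system are streamed from disk shards) and accumulates
# incoming rank mass per vertex. Not GraphChi's actual API.

def pagerank(num_vertices, edges, iters=20, d=0.85):
    rank = [1.0 / num_vertices] * num_vertices
    out_deg = [0] * num_vertices
    for src, _ in edges:
        out_deg[src] += 1
    for _ in range(iters):
        incoming = [0.0] * num_vertices
        for src, dst in edges:        # one sequential pass per sweep
            incoming[dst] += rank[src] / out_deg[src]
        rank = [(1 - d) / num_vertices + d * x for x in incoming]
    return rank

# On a directed 3-cycle, every vertex ends up with equal rank.
ranks = pagerank(3, [(0, 1), (1, 2), (2, 0)])
```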


Natural Language Understanding Using Knowledge and Construction Grammars

Tuesday, May 7th, 2013 from 12-1 pm in GHC 6501.

Presented by Fatima Al-Raisi, LTI

This talk will present an approach to natural language understanding (NLU) using knowledge and construction grammars. The aim of the system is to extract "meaning" from unstructured text by recognizing surface forms that match predefined "constructions". These constructions are defined as pairings of "surface form" and "underlying meaning". Based on this matching, a semantic representation of the input is created and relevant knowledge is extracted. The knowledge base also plays a role in the recognition process. The talk will introduce NLU using knowledge and construction grammars and briefly discuss possible applications, as well as two related problems: learning and evaluation.
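A toy illustration of a "construction" as a surface-form/meaning pairing (the pattern and semantic frame below are hypothetical, not from the talk, and omit any role for a knowledge base): a recognized surface form fills the slots of a meaning template.

```python
# Toy construction matcher: each construction pairs a surface-form
# pattern with a meaning template; matching text instantiates a
# semantic frame. Purely illustrative of the pairing idea.

import re

CONSTRUCTIONS = [
    # surface form pattern          -> underlying meaning template
    (re.compile(r"(\w+) gave (\w+) to (\w+)"),
     lambda m: {"event": "TRANSFER", "giver": m.group(1),
                "object": m.group(2), "recipient": m.group(3)}),
]

def understand(text):
    for pattern, frame in CONSTRUCTIONS:
        m = pattern.search(text)
        if m:
            return frame(m)    # build the semantic representation
    return None                # no construction recognized

meaning = understand("Alice gave flowers to Bob")
```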


Unifying Guilt-by-Association Approaches: Theorems and Fast Algorithms

Friday, May 10th, 2013 from 12-1 pm in GHC 7501.

Presented by Danai Koutra, CSD

If several friends of Smith have committed petty thefts, what would you say about Smith? Most people would not be surprised if Smith is a hardened criminal. Guilt-by-association methods combine weak signals to derive stronger ones, and have been extensively used for anomaly detection and classification in numerous settings (e.g., accounting fraud, cyber-security, calling-card fraud).

The focus of this work is to compare and contrast several very successful, guilt-by-association methods: Random Walk with Restarts, Semi-Supervised Learning, and Belief Propagation (BP).

Our main contributions are two-fold: (a) theoretically, we prove that all the methods result in a similar matrix inversion problem; (b) for practical applications, we developed FaBP, a fast algorithm that yields 2x speedup, equal or higher accuracy than BP, and is guaranteed to converge. We demonstrate these benefits using synthetic and real datasets, including YahooWeb, one of the largest graphs ever studied with BP.

Collaborators: Tai-You Ke, U Kang, Duen Horng (Polo) Chau, Hsing-Kuo Kenneth Pao, Christos Faloutsos

Presented in Partial Fulfillment of the Speaking Requirement
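To give a concrete feel for one of the methods compared, here is Random Walk with Restarts sketched via power iteration (rather than the matrix inversion the unifying theory exposes): scores propagate from known "guilty" seed nodes, so nodes closer to the seeds score higher. The graph and parameters are illustrative.

```python
# Random Walk with Restarts, one of the guilt-by-association methods
# compared in the talk: a walker follows edges with probability
# (1 - restart) and jumps back to a seed node otherwise. Computed
# here by simple power iteration on a small undirected path graph.

def rwr(adj, seeds, restart=0.15, iters=50):
    n = len(adj)
    score = [1.0 / len(seeds) if v in seeds else 0.0 for v in range(n)]
    for _ in range(iters):
        new = [0.0] * n
        for v, nbrs in enumerate(adj):
            for u in nbrs:                       # spread mass to neighbors
                new[u] += (1 - restart) * score[v] / len(nbrs)
        for s in seeds:                          # restart at the seeds
            new[s] += restart / len(seeds)
        score = new
    return score

# Path graph 0-1-2-3 with node 0 as the "guilty" seed: proximity to
# the seed yields a higher association score.
adj = [[1], [0, 2], [1, 3], [2]]
scores = rwr(adj, seeds={0})
```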


Web contact: sss+www@cs