CyLab

Studies of Internet censorship rely on an experimental technique called probing.  From a client within each country under investigation, the experimenter attempts to access network resources that are suspected to be censored, and records what happens.  The set of resources to be probed is a crucial, but often neglected, element of the experimental design.

We analyze the content and longevity of 758,191 webpages drawn from 22 different probe lists, of which 15 are alleged to be actual blacklists of censored webpages in particular countries, three were compiled using a priori criteria for selecting pages with an elevated chance of being censored, and four are controls.  We find that the lists have very little overlap in terms of specific pages.  Mechanically assigning a topic to each page, however, reveals common themes, and suggests that hand-curated probe lists may be neglecting certain frequently-censored topics.  We also find that pages on controversial topics tend to have much shorter lifetimes than pages on uncontroversial topics.  Hence, probe lists need to be continuously updated to be useful.

To carry out this analysis, we have developed automated infrastructure for collecting snapshots of webpages, weeding out irrelevant material (e.g., site “boilerplate” and parked domains), translating text, assigning topics, and detecting topic changes.  The system scales to collections of hundreds of thousands of pages.
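As a rough illustration of the topic-assignment stage only (this is a toy sketch, not the authors' infrastructure, and its thresholds and model sizes are arbitrary assumptions), one can fetch pages, crudely strip markup, and label each page with its dominant LDA topic:

```python
# Toy sketch of one pipeline stage: fetch pages, strip markup, assign a
# dominant LDA topic.  Boilerplate removal and translation are far cruder
# here than in any real system.
import re
import requests
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def visible_text(html):
    """Very crude markup/boilerplate stripping via regexes."""
    html = re.sub(r'(?is)<(script|style).*?</\1>', ' ', html)
    return re.sub(r'(?s)<[^>]+>', ' ', html)

def assign_topics(urls, n_topics=10):
    docs = [visible_text(requests.get(u, timeout=10).text) for u in urls]
    vec = CountVectorizer(max_features=5000, stop_words='english')
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(X)   # one topic distribution per page
    return {u: int(dt.argmax()) for u, dt in zip(urls, doc_topic)}

# Example (assumes network access and a list of probe URLs):
# print(assign_topics(["https://example.com", "https://example.org"]))
```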

Zachary Weinberg was born in Los Angeles, CA in 1978, but escaped at the earliest opportunity.  He has spent the years since doing, variously, chemistry, C compiler maintenance, cognitive linguistics, distributed version control system development, Web browser development, and Web security, before a fateful internship at SRI in 2012 put him onto censorship circumvention and measurement.

This is a practice talk for PETS 2017.

Failure to sufficiently identify computer security threats leads to missing security requirements and poor architectural decisions, resulting in vulnerabilities in cyber and cyber-physical systems.

Our prior research study evaluated three exemplar Threat Modeling Methods (TMMs), designed on different principles, in order to understand the strengths and weaknesses of each method. Our goal is to produce a set of tested principles that can help programs select the most appropriate TMMs. This will result in improved confidence in the cyber threats identified, accompanied by evidence of the conditions under which each technique is most effective. This presentation will describe the study, its results, and future plans.

Nancy R. Mead is a Fellow and Principal Researcher at the Software Engineering Institute (SEI). Mead is an Adjunct Professor of Software Engineering at Carnegie Mellon University. She is currently involved in the study of security requirements engineering and the development of software assurance curricula. She also served as director of software engineering education for the SEI from 1991 to 1994. Her research interests are in the areas of software security, software requirements engineering, and software architectures.

Prior to joining the SEI, Mead was a senior technical staff member at IBM Federal Systems, where she spent most of her career in the development and management of large real-time systems. She also worked in IBM's software engineering technology area and managed IBM Federal Systems' software engineering education department. She has developed and taught numerous courses on software engineering topics, both at universities and in professional education courses.

Mead has authored more than 150 publications and invited presentations. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Computer Society, and is a Distinguished Educator of the Association for Computing Machinery. She received the 2015 Distinguished Education Award from the IEEE Computer Society Technical Council on Software Engineering. The Nancy Mead Award for Excellence in Software Engineering Education is named for her and has been awarded since 2010, with Mary Shaw as the first recipient. Dr. Mead earned her PhD in mathematics from the Polytechnic Institute of New York, and her BA and MS in mathematics from New York University.

Self-driving vehicle technologies are expected to play a significant role in the future of transportation. One of the main challenges on public roads is safe cooperation and collaboration among multiple vehicles using inter-vehicle communications. In particular, road intersections are serious bottlenecks of urban transportation: more than 44% of all reported crashes for human-driven vehicles occur within intersection areas. Merge points, one type of intersection, are frequent sites of serious traffic accidents, so collaboration and cooperation are especially critical there. In this work, we present a safe traffic protocol for use at merge points, where vehicles arriving from two lanes with different priorities must interleave to form a single lane of traffic. Simulation results show that our cooperative protocol achieves higher traffic throughput than simple traffic protocols while ensuring safety.
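To make the interleaving problem concrete, here is a hypothetical toy rule (not the protocol presented in the talk): vehicles cross the merge point in order of estimated arrival, separated by a fixed minimum headway, with ties broken in favor of the priority lane.

```python
# Hypothetical priority-aware merge rule; the headway and tie-breaking
# policy are assumptions for illustration only.
HEADWAY = 2.0  # minimum time gap (s) between vehicles at the merge point

def merge_schedule(priority_lane, minor_lane):
    """Given estimated arrival times (s) at the merge point for each lane,
    return a crossing schedule as (lane, arrival, crossing_time) tuples."""
    vehicles = [("priority", t) for t in sorted(priority_lane)] + \
               [("minor", t) for t in sorted(minor_lane)]
    # Earliest arrival first; ties broken in favor of the priority lane.
    vehicles.sort(key=lambda v: (v[1], v[0] != "priority"))
    schedule, last_cross = [], float("-inf")
    for lane, arrival in vehicles:
        cross = max(arrival, last_cross + HEADWAY)  # wait for a safe gap
        schedule.append((lane, arrival, cross))
        last_cross = cross
    return schedule

if __name__ == "__main__":
    for lane, arrival, cross in merge_schedule([0.0, 3.0, 4.0], [1.0, 3.5]):
        print(f"{lane:8s} arrives {arrival:4.1f}s  crosses {cross:4.1f}s")
```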

Shunsuke Aoki is a 2nd year Ph.D. student in the Department of Electrical and Computer Engineering advised by Professor Ragunathan (Raj) Rajkumar. His current research interests include Vehicular Communications and Cyber-Physical Systems. Prior to joining CMU in 2015, he worked as a Research Fellow with the Social Computing Group at Microsoft Research Asia. He received his B.Eng. from Waseda University in 2012 and his MSc from The University of Tokyo in 2014.

This is a practice talk for the ECE Qualification Exam.

As espionage becomes more prominent in cyberspace, a nascent industry has been born to investigate and mitigate cyberespionage campaigns. Financial incentives have established a structure for this industry that runs counter to the rules of the great game, by naming and shaming countries in their most sensitive operations. As these companies move their work under the cover of NDAs to avoid inflaming political sensitivities, who will rise to solve the validation crisis and keep threat intelligence producers honest? This talk will discuss the evolution of the threat intelligence production space and the role that academia can play within it.

Juan Andrés Guerrero joined Kaspersky's Global Research and Analysis Team (GReAT) in 2014 to focus on targeted attacks. Before joining Kaspersky, he worked as Senior Cybersecurity and National Security Advisor to the President of Ecuador. Juan Andrés comes from a background of specialized research in philosophical logic. His latest publications include "The Ethics and Perils of APT Research: An Unexpected Transition Into Intelligence Brokerage" and "Wave Your False Flags: Deception Tactics Muddying Attribution in Targeted Attacks".

Current network deployments use specialized, standalone appliances or “middleboxes” to execute a variety of Network Functions (NFs) and policies. In this context, Network Functions Virtualization (NFV) is a new networking concept that promises to bring about a paradigm shift in middlebox implementation by replacing traditional middleboxes with software implementations of NFs running on commodity hardware. In this work, we present and discuss the recent technological advances that facilitate the move towards NFV and software-based packet processing. We specifically focus on the challenge of achieving predictable network performance in the context of NFV and discuss the limitations of previous research. We then introduce a new metric, the Cache Access Rate of an NF, to enable more accurate performance prediction of NFs, and briefly discuss our plans for future work.
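To give a feel for why a cache-centric metric helps with performance prediction, here is a back-of-the-envelope sketch: given an NF's cache accesses per packet and assumed hit rates and access latencies, one can estimate per-packet memory stall time and hence a rough throughput figure. All constants and the simple additive model below are illustrative assumptions, not measurements or formulas from this work.

```python
# Illustrative only: a toy predictor built around a "cache accesses per
# packet" style metric.  Latencies and the linear model are assumptions.
L1_HIT_NS, LLC_HIT_NS, DRAM_NS = 1.5, 15.0, 80.0  # assumed access latencies

def per_packet_memory_ns(accesses_per_pkt, l1_hit_rate, llc_hit_rate):
    """Estimate memory stall time per packet from cache behaviour."""
    l1_hits = accesses_per_pkt * l1_hit_rate
    llc_hits = accesses_per_pkt * (1 - l1_hit_rate) * llc_hit_rate
    dram = accesses_per_pkt - l1_hits - llc_hits
    return l1_hits * L1_HIT_NS + llc_hits * LLC_HIT_NS + dram * DRAM_NS

def predicted_throughput_mpps(accesses_per_pkt, l1_hit_rate, llc_hit_rate,
                              compute_ns_per_pkt=200.0):
    total_ns = compute_ns_per_pkt + per_packet_memory_ns(
        accesses_per_pkt, l1_hit_rate, llc_hit_rate)
    return 1e3 / total_ns  # 1e9 packets/s divided by 1e6 -> Mpps

# Example: 60 cache accesses/packet, 90% L1 and 70% LLC hit rates.
print(round(predicted_throughput_mpps(60, 0.9, 0.7), 2))
```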

Antonis Manousis is a 2nd year PhD student in the department of Electrical and Computer Engineering advised by Professor Vyas Sekar. His current research interests include computer networks with a focus on performance of network middleboxes. Antonis received his undergraduate degree from the National Technical University of Athens in 2015.

Practice talk.

In modern CMOS-based processors, static leakage power dominates the total power consumption. This phenomenon necessitates the use of sleep states to save energy. In this paper, we discuss partitioned fixed-priority energy-saving schedulers, which utilize a processor's sleep states to save energy. The techniques presented rely on an enhanced version of Energy-Saving Rate-Harmonized Scheduling (ES-RHS) and on Energy-Saving Rate-Monotonic Scheduling (ES-RMS) to maximize the time the processor spends in its lowest-power deep-sleep state. In some modern multi-core processors, all cores need to transition synchronously into deep sleep. For this class of processors, we present a partitioning technique based on task information to maximize the synchronous deep-sleep duration across all processing cores, and we illustrate the benefits of using ES-RMS over ES-RHS. For processors that allow cores to transition into deep sleep individually, we show that, while utilizing ES-RHS on each core, any feasible partition can optimally utilize all of the processor's idle durations for deep sleep. Experimental evaluations show that the proposed techniques can provide significant energy savings.
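The key constraint for processors whose cores must sleep synchronously is that deep sleep is only available during intervals when every core is idle at the same time, which is why the task-to-core partition matters. The small sketch below is an illustration of that constraint (not the paper's algorithm): it computes the synchronous sleep windows as the intersection of per-core idle intervals.

```python
# Illustration: synchronous deep sleep is limited to the intersection of
# all cores' idle intervals.  Interval lists are assumed sorted and disjoint.

def intersect_two(a, b):
    """Intersect two sorted lists of (start, end) idle intervals."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start, end = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def synchronous_sleep_windows(per_core_idle):
    """per_core_idle: one sorted idle-interval list per core."""
    windows = per_core_idle[0]
    for core in per_core_idle[1:]:
        windows = intersect_two(windows, core)
    return windows

# Two cores: only the overlapping idle time is usable for deep sleep.
print(synchronous_sleep_windows([[(0, 4), (10, 14)], [(2, 6), (9, 12)]]))
# -> [(2, 4), (10, 12)]
```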

Sandeep D'souza is a 2nd year Ph.D. student in the Department of Electrical and Computer Engineering, and works with Professor Ragunathan (Raj) Rajkumar. His current research interests include real-time systems theory and designing scalable distributed Cyber-Physical Systems. He received his B.Tech in Electronics and Electrical Communication Engineering from the Indian Institute of Technology Kharagpur in 2015.

In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and in need of the scammers' technical support. In this process, the victims are asked to give the scammers remote access to their machines; the scammers then "diagnose the problem" before offering their support services, which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.

In this talk, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months, we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcement agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.
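As a flavor of one small piece of such automated discovery (a hypothetical fragment, far simpler than the system described in the talk), the snippet below pulls candidate toll-free numbers and domains out of a crawled scam page.

```python
# Hypothetical indicator extraction from a crawled scam page; the regexes
# and the sample page are invented for illustration.
import re

TOLLFREE = re.compile(r'\b1?[-.\s(]*8(?:00|33|44|55|66|77|88)[-.\s)]*\d{3}[-.\s]*\d{4}\b')
DOMAIN = re.compile(r'\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b', re.I)

def extract_indicators(html):
    """Return (phone numbers, domains) found in the page source."""
    phones = {re.sub(r'\D', '', m) for m in TOLLFREE.findall(html)}
    domains = {d.lower() for d in DOMAIN.findall(html)}
    return phones, domains

page = '<div class="alert">VIRUS DETECTED! Call 1-855-123-4567 or visit win-support-help.com now</div>'
print(extract_indicators(page))
```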

Dr. Nick Nikiforakis (PhD'13) is an Assistant Professor in the Department of Computer Science at Stony Brook University. He is the director of the PragSec lab, where students conduct research in all aspects of pragmatic security and privacy, including web tracking, mobile security, DNS abuse, social engineering, and cyber crime. He has authored more than 40 academic papers, and his work often finds its way into the popular press, including The Register, Slashdot, BBC, and Wired. His research is supported by the National Science Foundation and the Office of Naval Research, and he regularly serves on the program committees of all top-tier security conferences.

Researchers in cybersecurity often face two conundrums: 1) it is hard to find real-world problems that are interesting to researchers; 2) it is hard to transition cybersecurity research results into practical use. In this talk I will discuss how we overcome these two obstacles in our four-year, still-ongoing effort of using an anthropological approach to study cybersecurity operations.

Frequent news reports of breaches at well-funded organizations show a pressing need to improve security operations. However, there has been very little academic research into the problem. Since most cyber defense tasks involve humans --- security analysts --- it is natural to adopt a human-centric approach to study the problem. Unlike most of the usable security research that has flourished in recent years, research about security analysts is extremely difficult to conduct in the usual way, such as through surveys and interviews.

Security Operation Centers (SOCs) have a culture of secrecy. It is extremely difficult for researchers to gain the trust of analysts and discover how they really do their job. As a result, research that could benefit security operations is often conducted based on assumptions that do not hold in the real world.  We overcome this hurdle by adopting a well-established research method from social and cultural anthropology -- long-term participant observation. Multiple PhD and undergraduate students in computer science were trained by an anthropologist in this research method and embedded in the SOCs of both academic institutions and corporations. By becoming one of the subjects we want to study, we perform reflection and reconstruction to gain the "native point of view" of security analysts. Through four years (and counting) of fieldwork in two academic and three corporate SOCs, we collected large amounts of data in the form of field notes.

After systematically analyzing the data using qualitative methods widely used in social science research, such as grounded theory and template analysis, we uncovered major findings that explain the burnout phenomenon in SOCs. We further found that the Activity Theory framework formulated by Engeström provides a deep explanation of the many conflicts we found in an SOC environment that cause inefficiency, and offers insights into how to turn those contradictions into opportunities for innovation to improve operational efficiency.

Finally, in the most recent SOC fieldwork, we were able to achieve our initial goal of conducting this anthropological research -- designing effective technologies for security operations that were taken up by the analysts and improved their work efficiency.


The Domain Name System (DNS) is a critical component of the Internet. This critical role often makes it the target of direct cyber-attacks and other forms of abuse. Cyber-criminals rely heavily upon the reliability and scalability of the DNS protocol to serve as an agile platform for their illicit network operations. For example, modern malware and Internet fraud techniques rely upon the DNS to locate their remote command-and-control (C&C) servers through which new commands from the attacker are issued, to serve as exfiltration points for information stolen from victims' computers, and to manage subsequent updates to their malicious toolset.

In this talk I will discuss how we can reason about Internet abuse using DNS. First, I will argue why the algorithmic quantification of DNS reputation and trust is fundamental for understanding the security of our Internet communications. Then, I will examine how DNS traffic relates to malware communications. Among other things, we will reason about data-driven methods that can be used to reliably detect malware communications that employ Domain Name Generation Algorithms (DGAs) --- even in the complete absence of the malware sample. Finally, I will conclude my talk by providing a five-year overview of malware network communications. Through this study we will see that (as network security researchers and practitioners) we are still approaching the very simple detection problems fundamentally in the wrong way.
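To give a flavor of why DGA-generated names are detectable at all (this is a deliberately simplistic heuristic, not one of the data-driven methods discussed in the talk, which also exploit signals such as NXDOMAIN patterns across many hosts), algorithmically generated labels tend to be long, high-entropy strings compared with human-chosen domains:

```python
# Simplistic DGA heuristic: long, high-entropy second-level labels.
# Thresholds are arbitrary assumptions for illustration.
import math
from collections import Counter

def char_entropy(label):
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_generated(domain, entropy_threshold=3.5, min_len=12):
    label = domain.split('.')[0].lower()
    return len(label) >= min_len and char_entropy(label) > entropy_threshold

for d in ["google.com", "xjwq7f9k2lmzpqv.net", "carnegiemellon.edu"]:
    print(d, looks_generated(d))
```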

Dr. Manos Antonakakis (PhD’12) is an Assistant Professor in the School of Electrical and Computer Engineering (ECE), and adjunct faculty in the College of Computing (CoC), at the Georgia Institute of Technology. He is responsible for the Astrolavos Lab, where students conduct research in the areas of Attack Attribution, Network Security and Privacy, Intrusion Detection, and Data Mining. In May 2012, he received his Ph.D. in Computer Science from the Georgia Institute of Technology. Before joining the Georgia Tech ECE faculty ranks, Dr. Antonakakis held the Chief Scientist role at Damballa, where he was responsible for advanced research projects, university collaborations, and technology transfer efforts. He currently serves as the co-chair of the Academic Committee for the Messaging Anti-Abuse Working Group (MAAWG). Since he joined Georgia Tech in 2014, Dr. Antonakakis has raised more than $20M in research funding as Principal Investigator from government agencies and the private sector. Dr. Antonakakis is the author of several U.S. patents and academic publications in top academic conferences.

Despite soaring investments in IT infrastructure, the state of operational network security continues to be abysmal. We argue that this is because existing enterprise security approaches fundamentally lack precision in one or more dimensions: (1) isolation to ensure that the enforcement mechanism does not induce interference across different principals; (2) context to customize policies for different devices; and (3) agility to rapidly change the security posture in response to events. To address these shortcomings, we present PSI, a new enterprise network security architecture. PSI enables fine-grained and dynamic security postures for different network devices. These postures are implemented in isolated enclaves and thus provide precision along the above dimensions by construction. To this end, PSI leverages recent advances in software-defined networking (SDN) and network functions virtualization (NFV). We design expressive policy abstractions and scalable orchestration mechanisms to implement the security postures. We implement PSI using an industry-grade SDN controller (OpenDaylight) and integrate several commonly used enforcement tools (e.g., Snort, Bro, Squid). We show that PSI is scalable and is an enabler for new detection and prevention capabilities that would be difficult to realize with existing solutions.
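As a rough illustration of what per-device, context-aware, dynamically updated postures might look like (a toy data model with invented names, not PSI's actual policy language, API, or orchestration layer):

```python
# Toy model of per-device security postures: each device gets its own
# isolated chain of virtual NFs, updated in response to events (agility).
from dataclasses import dataclass, field

@dataclass
class Posture:
    device: str                      # e.g. MAC address or host name
    context: str                     # e.g. "guest-laptop", "patched-server"
    chain: list = field(default_factory=list)  # ordered NF enclave chain

class PolicyEngine:
    def __init__(self):
        self.postures = {}

    def set_posture(self, device, context, chain):
        self.postures[device] = Posture(device, context, list(chain))

    def on_event(self, device, event):
        # Escalate the device's posture in response to an alert.
        posture = self.postures[device]
        if event == "ids_alert" and "deep-inspection" not in posture.chain:
            posture.chain.append("deep-inspection")
        elif event == "exfil_suspected":
            posture.chain = ["quarantine"]
        return posture.chain

engine = PolicyEngine()
engine.set_posture("aa:bb:cc:dd:ee:ff", "guest-laptop", ["firewall", "proxy"])
print(engine.on_event("aa:bb:cc:dd:ee:ff", "ids_alert"))
# -> ['firewall', 'proxy', 'deep-inspection']
```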

This is a practice talk for Tianlong’s presentation at NDSS 2016.

Tianlong Yu is a third-year PhD student in CyLab, advised by Prof. Vyas Sekar and Prof. Srini Seshan. His research focuses on extending Software Defined Networking (SDN) and Network Function Virtualization (NFV) to provide customized, dynamic, and isolated policy enforcement for critical assets in the network. More broadly, his research covers enterprise network security and Internet-of-Things network security.
