Failure to sufficiently identify computer security threats leads to missing security requirements and poor architectural decisions, resulting in vulnerabilities in cyber and cyber-physical systems.

Our prior research study evaluated three exemplar Threat Modeling Methods (TMMs), designed on different principles, to understand the strengths and weaknesses of each. Our goal is to produce a set of tested principles that can help programs select the most appropriate TMMs. This will result in improved confidence in the cyber threats identified, accompanied by evidence of the conditions under which each technique is most effective. This presentation will describe the study, its results, and future plans.

Nancy R. Mead is a Fellow and Principal Researcher at the Software Engineering Institute (SEI). Mead is an Adjunct Professor of Software Engineering at Carnegie Mellon University. She is currently involved in the study of security requirements engineering and the development of software assurance curricula. She also served as director of software engineering education for the SEI from 1991 to 1994. Her research interests are in the areas of software security, software requirements engineering, and software architectures.

Prior to joining the SEI, Mead was a senior technical staff member at IBM Federal Systems, where she spent most of her career in the development and management of large real-time systems. She also worked in IBM's software engineering technology area and managed IBM Federal Systems' software engineering education department. She has developed and taught numerous courses on software engineering topics, both at universities and in professional education courses.

Mead has authored more than 150 publications and invited presentations. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Computer Society, and a Distinguished Educator of the Association for Computing Machinery. She received the 2015 Distinguished Education Award from the IEEE Computer Society Technical Council on Software Engineering. The Nancy Mead Award for Excellence in Software Engineering Education is named for her and has been awarded since 2010, with Mary Shaw as the first recipient. Dr. Mead earned her PhD in mathematics from the Polytechnic Institute of New York, and her BA and MS in mathematics from New York University.

Self-driving vehicle technologies are expected to play a significant role in the future of transportation. One of the main challenges on public roads is safe cooperation and collaboration among multiple vehicles using inter-vehicle communications. In particular, road intersections are serious bottlenecks in urban transportation: more than 44% of all reported crashes involving human-driven vehicles occur within intersection areas. Merge points, one type of intersection, are frequent sites of serious traffic accidents, making cooperation and collaboration in these areas critical. In this work, we present a safe traffic protocol for use at merge points, where vehicles arriving from two lanes with different priorities must interleave to form a single lane of traffic. Simulation results show that our cooperative protocol achieves higher traffic throughput than simple traffic protocols while ensuring safety.
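The abstract does not specify the protocol itself; as a rough illustration of the underlying scheduling problem, a bounded-wait interleaving policy at a merge point might be sketched as follows (the function name and the `max_wait` parameter are illustrative, not from the talk):

```python
from collections import deque

def merge_schedule(main_lane, merge_lane, max_wait=3):
    """Interleave two lanes into one. The main lane has priority, but a
    merge-lane vehicle is admitted after waiting max_wait slots, so it is
    never starved (a simple cooperative-yield rule)."""
    main, merge = deque(main_lane), deque(merge_lane)
    order, waited = [], 0
    while main or merge:
        if merge and (not main or waited >= max_wait):
            order.append(merge.popleft())  # yield to the waiting vehicle
            waited = 0
        else:
            order.append(main.popleft())
            waited += 1 if merge else 0    # count slots a merge vehicle waits
    return order
```

A purely priority-based rule would let the main lane drain completely before admitting any merge-lane vehicle; the bounded-wait rule above trades a little main-lane delay for fairness and steadier combined throughput.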

Shunsuke Aoki is a second-year Ph.D. student in the Department of Electrical and Computer Engineering, advised by Professor Ragunathan (Raj) Rajkumar. His current research interests include vehicular communications and cyber-physical systems. Prior to joining CMU in 2015, he worked as a Research Fellow with the Social Computing Group at Microsoft Research Asia. He received his B.Eng. from Waseda University in 2012 and his MSc from the University of Tokyo in 2014.

This is a practice talk for the ECE Qualification Exam.

As espionage becomes more prominent in cyberspace, a nascent industry has emerged to investigate and mitigate cyberespionage campaigns. Financial incentives have established a structure for this industry that runs counter to the rules of the great game, naming and shaming countries in their most sensitive operations. As these companies move their work under the cover of NDAs to avoid inflaming political sensitivities, who will rise to solve the validation crisis and keep threat intelligence producers honest? This talk will discuss the evolution of the threat intelligence production space and the role that academia can play within it.

Juan Andrés Guerrero joined GReAT in 2014 to focus on targeted attacks. Before joining Kaspersky, he worked as Senior Cybersecurity and National Security Advisor to the President of Ecuador. Juan Andrés comes from a background of specialized research in philosophical logic. His latest publications include "The Ethics and Perils of APT Research: An Unexpected Transition Into Intelligence Brokerage" and "Wave Your False Flags: Deception Tactics Muddying Attribution in Targeted Attacks".

Current network deployments use specialized, standalone appliances, or "middleboxes," to execute a variety of Network Functions (NFs) and policies. In this context, Network Functions Virtualization (NFV) is a new networking concept that promises to bring about a paradigm shift in middlebox implementation by replacing traditional middleboxes with software implementations of NFs running on commodity hardware. In this work, we present and discuss the recent technological advances that facilitate the move towards NFV and software-based packet processing. We specifically focus on the challenge of achieving predictable network performance in the context of NFV and discuss the limitations of previous research. We then introduce a new metric, the Cache Access Rate of an NF, to enable more accurate performance prediction of NFs, and briefly discuss our plans for future work.
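The abstract does not define the metric precisely; as a first-order illustration of why cache behavior drives NF performance predictability, one might model per-packet service time as fixed compute time plus memory-access time that depends on the cache hit ratio (all names and constants below are hypothetical round numbers, not measurements from the talk):

```python
def predicted_service_time_ns(accesses_per_packet, hit_ratio,
                              compute_ns=200, hit_ns=4, miss_ns=80):
    """Toy model: per-packet service time = compute time + memory time,
    where each of the packet's memory accesses costs hit_ns on a cache hit
    and miss_ns on a miss."""
    mem_ns = accesses_per_packet * (hit_ratio * hit_ns + (1 - hit_ratio) * miss_ns)
    return compute_ns + mem_ns

def predicted_throughput_pps(accesses_per_packet, hit_ratio, **model_params):
    """Throughput (packets/sec) is the reciprocal of per-packet service time."""
    return 1e9 / predicted_service_time_ns(accesses_per_packet, hit_ratio,
                                           **model_params)
```

Even this crude model shows why a utilization-only predictor fails: an NF whose working set falls out of cache (low hit ratio) can be several times slower per packet than the same NF running alone with a warm cache.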

Antonis Manousis is a second-year PhD student in the Department of Electrical and Computer Engineering, advised by Professor Vyas Sekar. His current research interests include computer networks, with a focus on the performance of network middleboxes. Antonis received his undergraduate degree from the National Technical University of Athens in 2015.

Practice talk.

In modern CMOS-based processors, static leakage power dominates total power consumption, necessitating the use of sleep states to save energy. In this paper, we discuss the use of partitioned fixed-priority energy-saving schedulers, which utilize a processor's deep-sleep state to save energy. The techniques presented rely on an enhanced version of Energy-Saving Rate-Harmonized Scheduling (ES-RHS) and on Energy-Saving Rate-Monotonic Scheduling (ES-RMS) to maximize the time the processor spends in the lowest-power deep-sleep state. In some modern multi-core processors, all cores must transition synchronously into deep sleep. For this class of processors, we present a partitioning technique that uses task information to maximize the synchronous deep-sleep duration across all processing cores. We also illustrate the benefits of using ES-RMS over ES-RHS for this class of processors. For processors whose cores can individually transition into deep sleep, we show that, when ES-RHS is used on each core, any feasible partition can optimally utilize all of the processor's idle durations to put it into deep sleep. Experimental evaluations show that the proposed techniques can provide significant energy savings.
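The paper's partitioning heuristic is not detailed in this abstract; the sketch below shows only the kind of utilization-based admission test that any such partitioned fixed-priority scheme builds on, using generic first-fit decreasing with the classic Liu and Layland rate-monotonic bound. The actual technique goes further, using task information to align idle intervals so cores can deep-sleep synchronously:

```python
def rm_bound(n):
    """Liu & Layland utilization bound for n tasks under rate-monotonic
    scheduling: n * (2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def partition_tasks(tasks, num_cores):
    """First-fit decreasing partitioning of (wcet, period) tasks by
    utilization, admitting a task to a core only if the core's task set
    still passes the RM utilization bound."""
    bins = [[] for _ in range(num_cores)]
    utils = [0.0] * num_cores
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for i in range(num_cores):
            if utils[i] + u <= rm_bound(len(bins[i]) + 1):
                bins[i].append((wcet, period))
                utils[i] += u
                break
        else:
            raise ValueError("first-fit found no feasible partition")
    return bins
```

Under a feasible partition like this, each core's slack can then be batched into sleep intervals; the synchronous-sleep case is harder precisely because those intervals must overlap across all cores.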

Sandeep D'souza is a second-year Ph.D. student in the Department of Electrical and Computer Engineering, and works with Professor Ragunathan (Raj) Rajkumar. His current research interests include real-time systems theory and designing scalable distributed cyber-physical systems. He received his B.Tech in Electronics and Electrical Communication Engineering from the Indian Institute of Technology Kharagpur in 2015.

In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and in need of technical support. In this process, the victims are asked to give the scammers remote access to their machines; the scammers then "diagnose the problem" before offering their support services, which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.

In this talk, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit in exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months, we collect a large corpus of technical support scams and use it to provide insights into their prevalence, the abused infrastructure, the illicit profits, and scammers' current evasion attempts. Finally, by setting up a controlled, IRB-approved experiment in which we interact with 60 different scammers, we experience their social engineering tactics first-hand while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcement agencies and propose technical and educational countermeasures to help users avoid being victimized by technical support scams.

Dr. Nick Nikiforakis (PhD '13) is an Assistant Professor in the Department of Computer Science at Stony Brook University. He is the director of the PragSec lab, where students conduct research in all aspects of pragmatic security and privacy, including web tracking, mobile security, DNS abuse, social engineering, and cybercrime. He has authored more than 40 academic papers, and his work often finds its way into the popular press, including The Register, Slashdot, BBC, and Wired. His research is supported by the National Science Foundation and the Office of Naval Research, and he regularly serves on the program committees of all top-tier security conferences.

Researchers in cybersecurity often face two conundrums: 1) it is hard to find real-world problems that are interesting to researchers; 2) it is hard to transition cybersecurity research results into practical use. In this talk I will discuss how we overcame these two obstacles in our four-year, still ongoing effort to study cybersecurity operations using an anthropological approach.

Frequent news reports of breaches at well-funded organizations show a pressing need to improve security operations. However, there has been very little academic research into the problem. Since most cyber defense tasks involve humans (security analysts), it is natural to adopt a human-centric approach to studying the problem. But unlike most of the usable-security research that has flourished in recent years, research about security analysts is extremely difficult to conduct in the usual ways, such as through surveys and interviews.

Security Operation Centers (SOCs) have a culture of secrecy. It is extremely difficult for researchers to gain the trust of analysts and discover how they really do their jobs. As a result, research that could benefit security operations is often conducted based on assumptions that do not hold in the real world. We overcome this hurdle by adopting a well-established research method from social and cultural anthropology: long-term participant observation. Multiple PhD and undergraduate students in computer science were trained by an anthropologist in this research method and embedded in the SOCs of both academic institutions and corporations. By becoming the very subjects we want to study, we perform reflection and reconstruction to gain the "native point of view" of security analysts. Through four years of (still ongoing) fieldwork in two academic and three corporate SOCs, we have collected large amounts of data in the form of field notes.

After systematically analyzing the data using qualitative methods widely used in social science research, such as grounded theory and template analysis, we uncovered major findings that explain the burnout phenomenon in SOCs. We further found that the Activity Theory framework formulated by Engeström provides a deep explanation of the many conflicts we found in an SOC environment that cause inefficiency, and offers insights into how to turn those contradictions into opportunities for innovation that improve operational efficiency.

Finally, in our most recent SOC fieldwork, we achieved the initial goal of this anthropological research: designing effective technologies for security operations that were taken up by the analysts and improved their work efficiency.


The Domain Name System (DNS) is a critical component of the Internet. That critical nature often makes DNS the target of direct cyber-attacks and other forms of abuse. Cyber-criminals rely heavily upon the reliability and scalability of the DNS protocol to serve as an agile platform for their illicit network operations. For example, modern malware and Internet fraud techniques rely upon the DNS to locate the remote command-and-control (C&C) servers from which new commands from the attacker are issued, to serve as exfiltration points for information stolen from victims' computers, and to manage subsequent updates to their malicious toolsets.

In this talk I will discuss how we can reason about Internet abuse using DNS. First, I will argue why the algorithmic quantification of DNS reputation and trust is fundamental to understanding the security of our Internet communications. Then, I will examine how DNS traffic relates to malware communications. Among other things, we will reason about data-driven methods that can reliably detect malware communications that employ Domain Generation Algorithms (DGAs) --- even in the complete absence of the malware sample. Finally, I will conclude my talk by providing a five-year overview of malware network communications. Through this study we will see that (as network security researchers and practitioners) we are still approaching even very simple detection problems in fundamentally the wrong way.
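As a toy illustration of the lexical signals that data-driven DGA detectors can exploit (this is not the speaker's method, and the feature set and thresholds below are arbitrary choices for the sketch), algorithmically generated domain names often exhibit unusually high character entropy or an unusual share of digits compared with human-chosen names:

```python
import math
from collections import Counter

def char_entropy(s):
    """Shannon entropy (bits/char) of the character distribution of s."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_algorithmic(domain, entropy_threshold=3.4, digit_ratio_threshold=0.3):
    """Flag a domain whose second-level label has high character entropy or
    many digits -- two of the many lexical features a real classifier would
    combine with DNS traffic features (query volume, NXDOMAIN rates, etc.)."""
    label = domain.split('.')[0].lower()
    digits = sum(ch.isdigit() for ch in label)
    return (char_entropy(label) > entropy_threshold
            or digits / max(len(label), 1) > digit_ratio_threshold)
```

Production systems go far beyond such per-name heuristics, clustering the resolution behavior of whole groups of domains in live DNS traffic, which is what makes detection possible even without the malware sample.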

Dr. Manos Antonakakis (PhD '12) is an Assistant Professor in the School of Electrical and Computer Engineering (ECE), and adjunct faculty in the College of Computing (CoC), at the Georgia Institute of Technology. He leads the Astrolavos Lab, where students conduct research in the areas of attack attribution, network security and privacy, intrusion detection, and data mining. In May 2012, he received his Ph.D. in Computer Science from the Georgia Institute of Technology. Before joining the Georgia Tech ECE faculty ranks, Dr. Antonakakis held the Chief Scientist role at Damballa, where he was responsible for advanced research projects, university collaborations, and technology transfer efforts. He currently serves as the co-chair of the Academic Committee for the Messaging Anti-Abuse Working Group (MAAWG). Since joining Georgia Tech in 2014, Dr. Antonakakis has raised more than $20M in research funding as Principal Investigator from government agencies and the private sector. Dr. Antonakakis is the author of several U.S. patents and academic publications in top academic conferences.

Despite soaring investments in IT infrastructure, the state of operational network security continues to be abysmal. We argue that this is because existing enterprise security approaches fundamentally lack precision in one or more dimensions: (1) isolation, to ensure that the enforcement mechanism does not induce interference across different principals; (2) context, to customize policies for different devices; and (3) agility, to rapidly change the security posture in response to events. To address these shortcomings, we present PSI, a new enterprise network security architecture. PSI enables fine-grained and dynamic security postures for different network devices; these are implemented in isolated enclaves and thus provide precise instrumentation along the above dimensions by construction. To this end, PSI leverages recent advances in software-defined networking (SDN) and network functions virtualization (NFV). We design expressive policy abstractions and scalable orchestration mechanisms to implement the security postures. We implement PSI using an industry-grade SDN controller (OpenDaylight) and integrate several commonly used enforcement tools (e.g., Snort, Bro, Squid). We show that PSI is scalable and enables new detection and prevention capabilities that would be difficult to realize with existing solutions.

This is a practice talk for Tianlong’s talk at NDSS 2016.

Tianlong Yu is a third-year PhD student in CyLab, advised by Prof. Vyas Sekar and Prof. Srini Seshan. His research focuses on extending Software Defined Networking (SDN) and Network Function Virtualization (NFV) to provide customized, dynamic, and isolated policy enforcement for critical assets in networks. More broadly, his research covers enterprise network security and Internet-of-Things network security.

In today’s data-centric economy, issues of privacy are becoming increasingly complex to manage. This is true for users, who often feel helpless when it comes to understanding and managing the many different ways in which their data can be collected and used. But it is also true for developers, service providers, app store operators, and regulators. A significant source of frustration has been the lack of progress in formalizing the disclosure of data collection and use practices. These disclosures today continue to take the form primarily of long privacy policies, which very few people actually read.

What if computers could actually understand the text of privacy policies? In this talk, I will report on our progress in developing techniques to do just that and will discuss the development and piloting of tools that build on these technologies. This includes an overview of a compliance tool for mobile apps. The tool automatically analyzes an app's code and compares its findings with disclosures made in the text of the app's privacy policy to identify potential compliance violations. I will report on a study of about 18,000 Android apps, whose results suggest that compliance issues are widespread.
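The tool itself combines static analysis of app code with natural-language analysis of policy text; its final comparison step could be sketched, using hypothetical data structures of my own (a set of observed data flows and a mapping of disclosed practices), as a simple set difference:

```python
def flag_potential_violations(code_data_flows, policy_disclosures):
    """Report (data_type, recipient) flows observed in an app's code that the
    privacy policy never discloses -- each is a potential compliance violation.

    code_data_flows: set of (data_type, recipient) pairs found by code analysis
    policy_disclosures: dict mapping each disclosed data_type to the set of
        recipients the policy text says may receive it
    """
    return {(data_type, recipient)
            for data_type, recipient in code_data_flows
            if data_type not in policy_disclosures
            or recipient not in policy_disclosures[data_type]}
```

The hard part, of course, is producing those two inputs reliably at the scale of 18,000 apps: extracting real data flows from bytecode and turning free-form policy prose into structured disclosures.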

In the second part of this talk, I will discuss how using machine learning we can also build models of people’s privacy preferences and help them manage their privacy settings. This will include an overview of our work on Personalized Privacy Assistants. These assistants are intended to selectively notify their users about data collection and use practices they may find egregious and are also capable of helping their users configure available privacy settings. We will review results of a pilot involving one such assistant developed to help users manage their mobile app permissions. I will conclude with a discussion of ongoing work to extend this functionality in the context of Internet of Things scenarios.

Norman M. Sadeh is a Professor in the School of Computer Science at Carnegie Mellon University (CMU) and a faculty member at CyLab. He is director of CMU’s Mobile Commerce Laboratory and co-director of the MSIT Program in Privacy Engineering. He also co-founded the School of Computer Science’s PhD Program in Societal Computing (formerly Computation, Organizations and Society). His primary research interests are in the areas of mobile computing, the Internet of Things, cybersecurity, online privacy, user-oriented machine learning, human-computer interaction, and artificial intelligence. His research has been credited with influencing the design and development of a number of commercial products as well as activities at the US Federal Trade Commission and the California Office of the Attorney General.

Between 2008 and 2011, Norman served as founding CEO of Wombat Security Technologies, a leading provider of SaaS cybersecurity training products and anti-phishing solutions originally developed as part of research with several of his colleagues at CMU. As chairman of the board and chief scientist, Norman remains actively involved in the company, working closely with the management team on both business and technology strategies. Among other activities, Norman currently leads two of the largest domestic research projects in privacy, an NSF SaTC Frontier project on Usable Privacy Policies and a project on Personalized Privacy Assistants funded by the DARPA Brandeis initiative, the National Science Foundation and Google’s IoT Expedition.

In the late nineties, Norman was a program manager with the European Commission’s ESPRIT research program, prior to serving for two years as Chief Scientist of its US $600M (EUR 550M) initiative in New Methods of Work and eCommerce within the Information Society Technologies (IST) program. As such, he was responsible for shaping European research priorities in collaboration with industry and universities across Europe. These activities eventually resulted in the launch of over 200 R&D projects involving over 1,000 European organizations from industry and research. While at the Commission, Norman also contributed to a number of EU policy initiatives related to eCommerce, the Internet, cybersecurity, privacy, and entrepreneurship.

