WebAssembly is a promising new technology that aims to bring performant, safe, multi-language execution to browsers and fundamentally shift the web away from a JavaScript monoculture. The WebAssembly effort is largely driven by browser vendors (Mozilla, Google, Microsoft, et al.), but it provides a sandboxing framework designed for use outside the browser as well. Edge compute services, such as content delivery networks, mesh networks, and edge cloud services, seek efficiency in engineering tradeoffs between performance, isolation, security, and safety. WebAssembly has the potential to revolutionize edge compute engineering by providing performant, secure execution of untrusted code in arbitrary applications. In this talk Jonathan Foote will cover salient aspects of WebAssembly, sandboxing technology, and early progress in research and development toward a security-hardened, high-performance sandbox design for running untrusted code on the server.

Jonathan Foote is principal security architect at Fastly, a content delivery network (CDN) that many ubiquitous and high-profile organizations like GitHub, Pinterest, and The New York Times rely on for the performance, reliability, and security of their web applications. Previously, Jonathan attacked a range of application and network environments as a penetration tester, performed security research at Carnegie Mellon University SEI/CERT, and engineered secure network communication systems for Fortune 100 companies. Jonathan holds a BS in Computer Science from Penn State and an MBA from Loyola University.

Mark Twain wrote: "the Lie, as a recreation, a solace, a refuge in time of need, the fourth Grace, the tenth Muse, man's best and surest friend, is immortal, and cannot perish from the earth." With the advance of Internet technology, governmental, academic, and commercial institutions seem to have joined forces to make Twain's prediction all but true. We examine the nature of deception (in biology and technology) using the framework of signaling games.


  • Information Asymmetry, Signaling Games, Risks and Deception
  • Signaling Games On the Internet (& Biology)
  • Costly Signaling, Blockchains, Verifiers and Recommenders
  • Circulating Money via Signaling Games
  • Value-Storing Money via Signaling Games
  • Case Studies: Identity
  • Data Science and Cyber Security

This talk will focus on Cyber Security and its implications for Privacy, Anonymity, Data Science, Finance and Market Microstructure.

Professor Bud Mishra is a professor of computer science and mathematics at NYU's Courant Institute of Mathematical Sciences, professor of engineering at NYU's Tandon School of Engineering, professor of human genetics at MSSM (Mt. Sinai School of Medicine), visiting scholar in quantitative biology at CSHL (Cold Spring Harbor Laboratory) and a professor of cell biology at NYU SoM (School of Medicine).

Prof. Mishra has industrial experience in Computer and Data Science (aiNexusLab, ATTAP, brainiad, Genesis Media, Pypestream, and Tartan Laboratories), Finance (Instadat, Pattern Recognition Fund and Tudor Investment), Robotics and Bio- and Nanotechnologies (Abraxis, Bioarrays, InSilico, MRTech, OpGen and Seqster). He is the author of a textbook on Algorithmic Algebra and more than two hundred archived publications. He has advised and mentored more than 37 graduate students and post-docs in the areas of computer science, robotics and control engineering, applied mathematics, finance, biology and medicine. He holds 21 issued and 23 pending patents in areas ranging over robotics, model checking, intrusion detection, cyber security, emergency response, disaster management, data analysis, biotechnology, nanotechnology, genome mapping and sequencing, mutation calling, cancer biology, fintech, adtech, edtech, internet architecture and linguistics.

Prof. Mishra's pioneering work includes: first application of model checking to hardware verification; first robotics technologies for grasping, reactive grippers and work holding; first single molecule genotype/haplotype mapping technology (Optical Mapping); first analysis of copy number variants with a segmentation algorithm; first whole-genome haplotype assembly technology (SUTTA); first clinical-genomic variant/base calling technology (TotalRecaller); first single molecule single cell nanomapping technology; etc. Prof. Mishra's ongoing work continues in the areas of single-molecule nano-mapping (with Gimzewski, Reed et al.), clinical genomics (with Burzycki, Cantor, Narzisi, Reed et al.), liquid biopsies (with Jee, Nudler et al.), cancer and immunology (with Antoniotti, Bannon, Cantor, Grossman, Korsunsky, Rabadan, Ramazzotti, Zhavoronkov et al.), cyber security (with Casey, Morales, Moore, Novak et al.), cryptography (with Gvili, Janwa, Kahrobaei et al.), linguistics (with Chakraborty, Rinberg, Tamaskar, Young et al.), financial engineering (with Deboneuill, Qi, Subramaniam, et al.) and internet of the future (with Rudolph, Savas, Weill et al.).

Prof. Mishra has a degree in Science from Utkal University, in Electronics and Communication Engineering from IIT, Kharagpur, and MS and PhD degrees in Computer Science from Carnegie-Mellon University.  He is a fellow of IEEE, ACM and AAAS, a Distinguished Alumnus of IIT (Kharagpur), and a NYSTAR Distinguished Professor.

The hope that cryptography and decentralization together might ensure robust user privacy was among the strongest drivers of the early success of Bitcoin's blockchain technology. A desire for privacy still permeates the growing blockchain user base today. Nevertheless, thanks to the inherently public nature of most blockchain ledgers, users' privacy is severely restricted, and several deanonymization attacks have been reported so far. A number of privacy-enhancing technologies have been proposed to solve these issues, and a few have also been implemented; however, some important challenges remain unresolved. In this talk, we discuss privacy challenges, promising solutions, and unresolved privacy issues with blockchain technology. In particular, we study prominent privacy attacks on Bitcoin and other blockchain solutions, analyze existing privacy solutions, and finally describe important unresolved challenges associated with blockchain transaction privacy.



Automated techniques and tools for finding, exploiting and patching vulnerabilities are maturing. In order to achieve an end goal such as winning a cyber-battle, these techniques and tools must be wielded strategically. Currently, strategy development in cyber-warfare, even with automated tools, is done manually, and is a bottleneck in practice. In this paper, we apply game theory toward the augmentation of the human decision-making process.

Our work makes two novel contributions. First, previous work is limited by strong assumptions regarding the number of actors, actions, and choices in cyber-warfare. We develop a novel model of cyber-warfare that is more comprehensive than previous work, removing these limitations in the process. Second, we present an algorithm for calculating the optimal strategy of the players in our model. We show that our model is capable of finding better solutions than previous work within seconds, making computer-time strategic reasoning a reality. We also provide new insights, compared to previous models, on the impact of optimal strategies.
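The notion of an "optimal strategy" in a two-player setting can be illustrated with a minimal sketch: a pure-strategy maximin choice in a toy zero-sum payoff matrix. The payoffs here are invented, and this is only a classical baseline, not the multi-stage model or algorithm the talk presents.

```python
def maximin(payoff):
    """Pure-strategy maximin for the row player in a zero-sum game:
    pick the row whose worst-case payoff is largest.
    payoff: list of rows, each a list of payoffs to the row player."""
    best_row, best_value = None, float("-inf")
    for i, row in enumerate(payoff):
        worst = min(row)  # opponent plays the column worst for us
        if worst > best_value:
            best_row, best_value = i, worst
    return best_row, best_value

# Toy attacker-vs-defender payoffs (rows: attacker actions).
print(maximin([[3, -1], [2, 1]]))  # (1, 1): row 1 guarantees payoff >= 1
```

Richer models, like the one in the talk, move beyond this to mixed and sequential strategies, which is where automated computation becomes essential.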

Tiffany Bao is a Ph.D. student in CyLab advised by Professor David Brumley. Her research interest is cyber autonomy, which includes both binary analysis techniques and game-theoretic strategies for computer security. She completed her B.S. in Computer Science at Peking University, China.

This is a practice talk for CSF 2017.

Studies of Internet censorship rely on an experimental technique called probing.  From a client within each country under investigation, the experimenter attempts to access network resources that are suspected to be censored, and records what happens.  The set of resources to be probed is a crucial, but often neglected, element of the experimental design.

We analyze the content and longevity of 758,191 webpages drawn from 22 different probe lists, of which 15 are alleged to be actual blacklists of censored webpages in particular countries, three were compiled using a priori criteria for selecting pages with an elevated chance of being censored, and four are controls.  We find that the lists have very little overlap in terms of specific pages.  Mechanically assigning a topic to each page, however, reveals common themes, and suggests that hand-curated probe lists may be neglecting certain frequently-censored topics.  We also find that pages on controversial topics tend to have much shorter lifetimes than pages on uncontroversial topics.  Hence, probe lists need to be continuously updated to be useful.

To carry out this analysis, we have developed automated infrastructure for collecting snapshots of webpages, weeding out irrelevant material (e.g. site “boilerplate” and parked domains), translating text, assigning topics, and detecting topic changes.  The system scales to hundreds of thousands of pages collected.
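The list-overlap finding can be made concrete with a small sketch: comparing probe lists as URL sets using Jaccard similarity. The list names and URLs below are invented for illustration and are not drawn from the study's data.

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two URL sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy probe lists (hypothetical URLs).
lists = {
    "country_x": {"http://news.example", "http://forum.example"},
    "country_y": {"http://news.example", "http://blog.example"},
    "control":   {"http://weather.example"},
}

for name_a in sorted(lists):
    for name_b in sorted(lists):
        if name_a < name_b:  # each unordered pair once
            sim = jaccard(lists[name_a], lists[name_b])
            print(f"{name_a} vs {name_b}: {sim:.2f}")
```

A near-zero score across most pairs is the kind of signal behind the paper's observation that probe lists share very few specific pages even when they share topics.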

Zachary Weinberg was born in Los Angeles, CA in 1978, but escaped at the earliest opportunity.  He has spent the years since doing, variously, chemistry, C compiler maintenance, cognitive linguistics, distributed version control system development, Web browser development, and Web security, before a fateful internship at SRI in 2012 put him onto censorship circumvention and measurement.

This is a practice talk for PETS 2017.

Failure to sufficiently identify computer security threats leads to missing security requirements and poor architectural decisions, resulting in vulnerabilities in cyber and cyber-physical systems.

Our prior research study evaluated three exemplar Threat Modeling Methods, designed on different principles, in order to understand strengths and weaknesses of each method. Our goal is to produce a set of tested principles which can help programs select the most appropriate TMMs. This will result in improved confidence in the cyber threats identified, accompanied by evidence of the conditions under which each technique is most effective. This presentation will describe the study, its results, and future plans.

Nancy R. Mead is a Fellow and Principal Researcher at the Software Engineering Institute (SEI). Mead is an Adjunct Professor of Software Engineering at Carnegie Mellon University. She is currently involved in the study of security requirements engineering and the development of software assurance curricula. She also served as director of software engineering education for the SEI from 1991 to 1994. Her research interests are in the areas of software security, software requirements engineering, and software architectures.

Prior to joining the SEI, Mead was a senior technical staff member at IBM Federal Systems, where she spent most of her career in the development and management of large real-time systems. She also worked in IBM's software engineering technology area and managed IBM Federal Systems' software engineering education department. She has developed and taught numerous courses on software engineering topics, both at universities and in professional education courses.

Mead has authored more than 150 publications and invited presentations. She is a Fellow of the Institute of Electrical and Electronics Engineers, Inc. (IEEE) and the IEEE Computer Society, and is a Distinguished Educator of the Association for Computing Machinery. She received the 2015 Distinguished Education Award from the IEEE Computer Society Technical Council on Software Engineering. The Nancy Mead Award for Excellence in Software Engineering Education is named for her and has been awarded since 2010, with Mary Shaw as the first recipient. Dr. Mead earned her PhD in mathematics from the Polytechnic Institute of New York, and a BA and an MS in mathematics from New York University.

Self-driving vehicle technologies are expected to play a significant role in the future of transportation. One of the main challenges on public roads is safe cooperation and collaboration among multiple vehicles using inter-vehicle communications. In particular, road intersections are serious bottlenecks in urban transportation: more than 44% of all reported crashes for human-driven vehicles occur within intersection areas. Merge points, a type of intersection, are frequent sites of serious traffic accidents, and cooperation among vehicles is critical there. In this work, we present a safe traffic protocol for use at merge points, where vehicles arriving from two lanes with different priorities must interleave to form a single lane of traffic. Simulation results show that our cooperative protocol achieves higher traffic throughput than simple traffic protocols, while ensuring safety.
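The interleaving behavior at a merge point can be sketched with a toy "zipper merge" baseline: alternate vehicles from the two lanes, letting the priority lane go first whenever both lanes have a vehicle waiting. This is an illustrative simple policy of the kind the cooperative protocol is compared against, not the protocol from the talk itself.

```python
from collections import deque

def zipper_merge(priority_lane, minor_lane):
    """Toy zipper merge: interleave two queues of vehicle IDs into a
    single lane, alternating turns, with the priority lane taking any
    slot the other lane cannot fill."""
    p, m = deque(priority_lane), deque(minor_lane)
    merged = []
    take_priority = True
    while p or m:
        if p and (take_priority or not m):
            merged.append(p.popleft())
        else:
            merged.append(m.popleft())
        take_priority = not take_priority
    return merged

print(zipper_merge(["P1", "P2", "P3"], ["M1", "M2"]))
# ['P1', 'M1', 'P2', 'M2', 'P3']
```

A real protocol must additionally enforce safe inter-vehicle gaps and handle communication delays, which is where throughput and safety trade off.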

Shunsuke Aoki is a 2nd year Ph.D. student in the Department of Electrical and Computer Engineering advised by Professor Ragunathan (Raj) Rajkumar. His current research interests include Vehicular Communications and Cyber-Physical Systems. Prior to joining CMU in 2015, he worked as a Research Fellow with the Social Computing Group at Microsoft Research Asia. He received his B. Eng. from Waseda University in 2012, and MSc from The University of Tokyo in 2014.

This is a practice talk for the ECE Qualification Exam.

As espionage becomes more prominent in cyberspace, a nascent industry has been born to investigate and mitigate cyberespionage campaigns. Financial incentives have established a structure for this industry that runs counter to the rules of the great game, by naming and shaming countries in their most sensitive operations. As these companies move their work under the cover of NDAs to avoid inflaming political sensitivities, who will rise to solve the validation crisis and keep threat intelligence producers honest? This talk will discuss the evolution of the threat intelligence production space and the role that academia can play within it.

Juan Andrés Guerrero joined GReAT in 2014 to focus on targeted attacks. Before joining Kaspersky, he worked as Senior Cybersecurity and National Security Advisor to the President of Ecuador. Juan Andrés comes from a background of specialized research in Philosophical Logic. His latest publications include 'The Ethics and Perils of APT Research: An Unexpected Transition Into Intelligence Brokerage' and ‘Wave Your False Flags: Deception Tactics Muddying Attribution in Targeted Attacks’.

Current network deployments use specialized, standalone appliances or "middleboxes" to execute a variety of Network Functions (NFs) and policies. In this context, Network Functions Virtualization (NFV) is a new networking concept that promises to bring about a paradigm shift in middlebox implementation by replacing traditional middleboxes with software implementations of NFs running on commodity hardware. In this work, we present and discuss the recent technological advances that facilitate the move towards NFV and software-based packet processing. We specifically focus on the challenge of achieving predictable network performance in the context of NFV and discuss the limitations of previous research. We then introduce a new metric, namely the Cache Access Rate of an NF, to enable more accurate performance prediction of NFs, and briefly discuss our plans for future work.
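The flavor of a per-NF cache metric can be sketched as cache accesses normalized by packets processed over a measurement interval; note this is only an assumed reading of the metric, and the paper's exact definition of Cache Access Rate may differ. The counter values below are hypothetical.

```python
def cache_access_rate(cache_accesses, packets_processed):
    """Cache accesses per processed packet over one measurement
    interval -- a sketch of a per-NF cache metric, not necessarily
    the paper's exact definition."""
    if packets_processed == 0:
        raise ValueError("no packets processed in interval")
    return cache_accesses / packets_processed

# Hypothetical hardware-counter readings for two NFs, same interval.
print(cache_access_rate(1_200_000, 100_000))  # 12.0 accesses/packet
print(cache_access_rate(350_000, 100_000))    # 3.5 accesses/packet
```

The intuition is that an NF touching the cache more per packet is more sensitive to cache contention from co-located NFs, which makes such a rate useful for performance prediction.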

Antonis Manousis is a 2nd year PhD student in the department of Electrical and Computer Engineering advised by Professor Vyas Sekar. His current research interests include computer networks with a focus on performance of network middleboxes. Antonis received his undergraduate degree from the National Technical University of Athens in 2015.

Practice talk.

In modern CMOS-based processors, static leakage power dominates the total power consumption. This phenomenon necessitates the use of sleep states to save energy. In this paper, we discuss the use of partitioned fixed-priority energy-saving schedulers, which utilize a processor's sleep states to save energy. The techniques presented rely on an enhanced version of Energy-Saving Rate-Harmonized Scheduling (ES-RHS) and Energy-Saving Rate-Monotonic Scheduling (ES-RMS) to maximize the time the processor spends in the lowest-power deep-sleep state. In some modern multi-core processors, all cores must transition synchronously into deep sleep. For this class of processors, we present a partitioning technique that leverages task information to maximize the synchronous deep-sleep duration across all processing cores. We also illustrate the benefits of using ES-RMS over ES-RHS for this class of processors. For processors that allow cores to individually transition into deep sleep, we show that, while utilizing ES-RHS on each core, any feasible partition can optimally utilize all of the processor's idle durations to put it into deep sleep. Experimental evaluations show that the proposed techniques can provide significant energy savings.
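Since ES-RMS builds on classical rate-monotonic scheduling, the feasibility question that partitioning must answer per core can be sketched with the classical Liu and Layland utilization bound. This is the standard sufficient RM test, not the energy-saving variant from the talk.

```python
def rm_utilization_bound(n):
    """Liu & Layland bound for n tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_feasible(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test
    for one core. tasks: list of (execution_time, period) pairs."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

taskset = [(1, 4), (1, 5), (2, 10)]  # (C_i, T_i); U = 0.65
print(rm_feasible(taskset))  # True: 0.65 <= 3*(2^(1/3)-1) ≈ 0.78
```

An energy-saving partitioner would layer onto such a per-core test an objective like aligning idle intervals across cores so the whole package can enter deep sleep together.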

Sandeep D'souza is a 2nd year Ph.D. student in the Department of Electrical and Computer Engineering, and works with Professor Ragunathan (Raj) Rajkumar. His current research interests include real-time systems theory and designing scalable distributed Cyber-Physical Systems. He received his B.Tech in Electronics and Electrical Communication Engineering from the Indian Institute of Technology Kharagpur in 2015.

