
We are clearly moving toward an Internet where encryption is ubiquitous—by some estimates, more than half of all Web traffic is HTTPS, and the number is growing. This is a win in terms of privacy and security, but it comes at the cost of functionality and performance, since encryption blinds middleboxes (devices like intrusion detection systems or web caches that process traffic in the network). In this talk I will describe two recent and ongoing projects exploring techniques for including middleboxes in secure sessions in a controlled manner. The first is a protocol, developed in collaboration with Telefónica Research and called Multi-Context TLS (mcTLS), that adds access control to TLS so that middleboxes can be added to a TLS session with restricted permissions. The second, which is ongoing work with Microsoft Research, explores bringing trusted computing technologies like Intel SGX to network middleboxes.
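As a hedged illustration of the access-control idea (a sketch of the concept, not the mcTLS specification), the Python fragment below derives separate read and write keys per context and hands a middlebox only the keys its permissions allow; the names and the key-derivation step are simplified assumptions:

```python
import os, hmac, hashlib

# Hypothetical sketch: the endpoints split a session into "contexts"
# (e.g., HTTP headers vs. body) and derive separate read/write keys for
# each. A middlebox receives only the keys for the contexts and operations
# its permissions allow, so access control is enforced cryptographically.

def derive_key(master_secret: bytes, context: str, purpose: str) -> bytes:
    # Simplified HKDF-like step; the real protocol uses the TLS key schedule.
    return hmac.new(master_secret, f"{context}|{purpose}".encode(),
                    hashlib.sha256).digest()

master = os.urandom(32)  # stands in for the negotiated master secret
contexts = ["headers", "body"]
keys = {c: {"read": derive_key(master, c, "read"),
            "write": derive_key(master, c, "write")}
        for c in contexts}

# A caching middlebox gets read-only access to headers and nothing else.
cache_keys = {"headers": {"read": keys["headers"]["read"]}}
```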

David Naylor is a sixth-year Ph.D. student at Carnegie Mellon University, where he's advised by Peter Steenkiste. His primary research interests are computer networking, security, and privacy, but he's also interested in Web measurement and performance. David is currently on the academic job market.

Algorithms in nature are simple and elegant, yet ultimately sophisticated. All behaviors are connected to the basic instincts we take for granted. The biomorphic approach attempts to connect artificial intelligence to primitive intelligence. It explores the idea that genuinely intelligent computers will be able to interact naturally with humans. To form that bridge, computers need the ability to recognize, understand, and even have instincts similar to those of living creatures. In this talk, I will introduce the theoretical models in my new book "Instinctive Computing" and a few real-world applications, including visual analytics of the dynamic patterns of malware spreading, SQL injection and DDoS attacks, IoT data analysis in a smart building, speaker verification on mobile phones, privacy algorithms for microwave imaging in airports, and privacy-aware smart windows for autonomous light-rail transit vehicles in downtown Singapore.

Dr. Yang Cai is a Senior Systems Scientist at CyLab, Director of the Visual Intelligence Studio, and Associate Professor of Biomedical Engineering at Carnegie Mellon University, Pittsburgh. His research interests include steganography, machine intelligence, video analytics, interactive visualization of Big Data, biomorphic algorithms, medical imaging systems, and visual privacy algorithms. He has published six books, including a new monograph, "Instinctive Computing" (Springer, 2016), and a textbook, "Ambient Diagnostics" (CRC, 2014). He has taught the courses "Image, Video and Multimedia" (ECE 18-798), "Cognitive Video" (ECE 18-799K), "Clinical Practicum" (BME 42-790), "Human Algorithms" (Fine Art 06-427), "Innovation Process" (HCI 05-899C), and the university-wide course "Creativity" (99-428). He was a Research Fellow in the Studio for Creative Inquiry at the School of Art and has exhibited his artwork in Italy and the U.S. He has been a volunteer scientist for 3D imaging at an archeology field school in the Alps for ten years.

Jam resistance for omnidirectional wireless networks is an important problem. Existing jam-resistant systems use a secret spreading sequence, a secret hop sequence, or some other information that must be kept secret from the jammer. BBC coding is revolutionary in that it achieves jam resistance without any shared secret. BBC requires a hash function that is fast and secure, but "secure" in a different sense than for standard cryptographic hashes. We present a potential hash function: Glowworm. For incremental hashes as used in BBC codes, it can hash a string of arbitrary length in 11 clock cycles. That is not 11 cycles per bit or 11 cycles per byte; it is 11 cycles to hash the entire string, given that the string being hashed differs from the previous one only by the addition or deletion of its last bit. An exhaustive security proof has been done for 32-bit Glowworm.
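To make the incremental property concrete, the toy Python sketch below shows a rotate-and-XOR state machine whose per-bit step is invertible, so appending or removing the last bit is O(1) work. This illustrates incremental hashing in general and is not Glowworm's actual construction; the constants are arbitrary placeholders:

```python
# Toy incremental hash: appending or removing the last bit updates the
# 32-bit digest in O(1) work rather than rehashing the whole string.
MASK = (1 << 32) - 1
T = [0x9E3779B9, 0x7F4A7C15]   # arbitrary per-bit constants (hypothetical)

def rotl(x, r): return ((x << r) | (x >> (32 - r))) & MASK
def rotr(x, r): return ((x >> r) | (x << (32 - r))) & MASK

def append_bit(h, bit):
    return rotl(h, 5) ^ T[bit]      # O(1) per appended bit

def remove_last_bit(h, bit):
    return rotr(h ^ T[bit], 5)      # exact inverse of append_bit, also O(1)

h = 0
for b in [1, 0, 1, 1]:              # digest of the bit string 1011
    h = append_bit(h, b)
# Removing the last bit recovers the digest of 101 without rehashing:
assert remove_last_bit(h, 1) == append_bit(append_bit(append_bit(0, 1), 0), 1)
```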

Martin Carlisle is a teaching professor in the Carnegie Mellon University Information Networking Institute and a security researcher in CMU's CyLab. Previously, he was a computer science professor at the United States Air Force Academy, Director of the Academy Center for Cyberspace Research, and founder and coach of the Air Force Academy Cyber Competition Team. Prof. Carlisle earned a PhD in Computer Science from Princeton University. His research interests include computer security, programming languages, and computer science education.

He is the primary author of RAPTOR, an introductory programming environment used in universities and schools around the world. He founded and coached the Air Force Academy Cyber Competition Team, which advanced to the National Collegiate Cyber Defense Competition in four different years. He is an ACM Distinguished Educator, a Colorado Professor of the Year, and a recipient of the Arthur S. Flemming Award for Exceptional Federal Service.

Authors of malicious software, or malware, have a plethora of options when deciding how to protect their code from network defenders and malware analysts. For many static analyses, malware authors do not even need sophisticated obfuscation techniques to bypass detection; simply compiling with different flags or with a different compiler will suffice. We propose a new static analysis, called CARDINAL, that is tolerant of the differences introduced by compiling the same source code with different flags or with different compilers. We accomplished this goal by finding an invariant across these differences: the number of arguments to a call, or callsite parameter cardinality (CPC). For each function, we concatenate all of its CPCs together and add the result to a Bloom filter. Signatures constructed in this manner can be quickly compared to each other using a Jaccard index to obtain a similarity score. We empirically tested our algorithm on a large corpus of transformed malware and found that a threshold value of 0.15 for determining a positive or negative match yielded an 11% false negative rate and an 11% false positive rate. Overall, we both demonstrate that CPCs are a telling feature that can increase the efficacy of static malware analyses and point the way forward in static analysis.
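A minimal Python sketch of the signature pipeline just described, assuming the per-function CPC strings have already been extracted from the binaries by a disassembler; the filter size, hash count, and example values are illustrative assumptions, not CARDINAL's actual parameters:

```python
import hashlib

def bloom_bits(items, m=1024, k=3):
    """Insert each item into an m-bit Bloom filter with k hash functions,
    represented here as the set of bit positions that are set."""
    bits = set()
    for item in items:
        for i in range(k):
            d = hashlib.sha256(f"{i}:{item}".encode()).digest()
            bits.add(int.from_bytes(d[:4], "big") % m)
    return bits

def jaccard(a, b):
    """Similarity score between two signatures: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if a | b else 1.0

# One concatenated CPC string per function, e.g. "203" for a function
# whose three callsites take 2, 0, and 3 arguments respectively.
sig_a = bloom_bits(["203", "11", "4020"])
sig_b = bloom_bits(["203", "11", "402"])   # same source, different compiler
match = jaccard(sig_a, sig_b) >= 0.15      # threshold from the abstract
```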

Martin Carlisle is a teaching professor in the Carnegie Mellon University Information Networking Institute and a security researcher in CMU's CyLab. Previously, he was a computer science professor at the United States Air Force Academy, Director of the Academy Center for Cyberspace Research, and founder and coach of the Air Force Academy Cyber Competition Team. Prof. Carlisle earned a PhD in Computer Science from Princeton University. His research interests include computer security, programming languages, and computer science education.

He is the primary author of RAPTOR, an introductory programming environment used in universities and schools around the world. He founded and coached the Air Force Academy Cyber Competition Team, which advanced to the National Collegiate Cyber Defense Competition in four different years. He is an ACM Distinguished Educator, a Colorado Professor of the Year, and a recipient of the Arthur S. Flemming Award for Exceptional Federal Service.

The Ironclad project at Microsoft Research is using a set of new and modified tools based on automated theorem proving to build Ironclad services.  An Ironclad service guarantees to remote parties that every CPU instruction the service executes adheres to a high-level specification, convincing clients that the service will be worthy of their trust.  To provide such end-to-end guarantees, we built a full stack of verified software.  That software includes a verified kernel; verified drivers; verified system and cryptography libraries including SHA, HMAC, and RSA; and four Ironclad Apps.  As a concrete example, our Ironclad database provably provides differential privacy to its data contributors.  In other words, if a client encrypts her personal data with the database's public key, then it can only be decrypted by software that guarantees, down to the assembly level, that it preserves differential privacy when releasing aggregate statistics about the data. 
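For intuition about the guarantee (this is an illustration of differential privacy in general, not Ironclad's verified code), a counting query can be protected with the classic Laplace mechanism, which adds noise scaled to sensitivity/epsilon before release:

```python
import random

# Illustration only: the Laplace mechanism for a counting query. A count
# has sensitivity 1, so noise with scale 1/epsilon suffices; a Laplace
# sample is a random sign times an exponential sample with rate epsilon.

def noisy_count(records, predicate, epsilon=0.5):
    true_count = sum(1 for r in records if predicate(r))
    noise = random.choice([-1, 1]) * random.expovariate(epsilon)
    return true_count + noise

ages = [34, 29, 41, 57, 33]
print(noisy_count(ages, lambda a: a >= 40))  # true answer 2, plus noise
```

The point of Ironclad is that this kind of property is proven to hold of the actual assembly the service executes, not merely of a high-level model like the one above.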

We then expanded the scope of our verification efforts to distributed systems, which are notorious for harboring subtle bugs.  We have developed IronFleet, a methodology for building practical and provably correct distributed systems.  We demonstrated the methodology on a complex implementation of a Paxos-based replicated state machine library and a lease-based sharded key-value store.  We proved that each obeys a concise safety specification, as well as desirable liveness requirements.  Each implementation achieves performance competitive with a reference system. 

In this talk, we describe our methodology, formal results, and lessons we learned from building large stacks of verified systems software.  In pushing automated verification tools to new scales (over 70K lines of code and proof so far), our team has both benefited from automated verification techniques and uncovered new challenges in using them.

By continuing to push verification tools to larger and more complex systems, Ironclad ultimately aims to raise the standard for security- and reliability-critical systems from "tested" to "correct".

Bryan Parno is a Researcher in the Security and Privacy Group at Microsoft Research.  After receiving a Bachelor's degree from Harvard College, he completed his PhD at Carnegie Mellon University, where his dissertation won the 2010 ACM Doctoral Dissertation Award.  In 2011, he was selected for Forbes' 30-Under-30 Science List. 

He formalized and worked to optimize verifiable computation, receiving a Best Paper Award at the IEEE Symposium on Security and Privacy for his advances. He coauthored a book on Bootstrapping Trust in Modern Computers, and his work in that area has been incorporated into the latest security enhancements in Intel CPUs. His research into security for new application models was incorporated into Windows and received Best Paper Awards at the IEEE Symposium on Security and Privacy and the USENIX Symposium on Networked Systems Design and Implementation. He has recently extended his interest in bootstrapping trust to the problem of building practical, formally verified secure systems. His other research interests include user authentication, secure network protocols, and security in constrained environments (e.g., RFID tags, sensor networks, or vehicles).

This session introduces essential ingredients for any cyber security program, called the Three T's of Cyber Security: Talent, Tools, and Techniques. Jim Routh, the CSO for Aetna and a board member of both the NH-ISAC and FS-ISAC, will share his perspective on which of the three T's is the most significant. He will share specific processes and methods in place at Aetna today, demonstrating the importance of "unconventional" controls that change the rules for threat adversaries, with specific examples of innovative use of early-stage technology solutions.

Jim Routh is the Chief Security Officer and leads the Global Information Security function for Aetna. He is the Chairman of the National Health ISAC and a board member of the FS-ISAC. He was formerly the Global Head of Application & Mobile Security for JPMorgan Chase. Prior to that, he was the CISO for KPMG, DTCC, and American Express, and he has over 30 years of experience in information technology and information security as a practitioner. He was the Information Security Executive of the Year for the Northeast in 2009 and the Information Security Executive of the Year in North America for Healthcare in 2014.

Traffic correlation attacks to de-anonymize Tor users are possible when an adversary is in a position to observe traffic entering and exiting the Tor network. Recent work has brought attention to the threat of these attacks by network-level adversaries (e.g., Autonomous Systems). We perform a historical analysis to understand how the threat from AS-level traffic correlation attacks has evolved over the past five years. We find that despite a large number of new relays added to the Tor network, the threat has grown. This points to the importance of increasing AS-level diversity in addition to the capacity of the Tor network. We identify and elaborate on common pitfalls of AS-aware Tor client design and construction. We find that succumbing to these pitfalls can negatively impact three major aspects of an AS-aware Tor client -- (1) security against AS-level adversaries, (2) security against relay-level adversaries, and (3) performance. Finally, we propose and evaluate a Tor client -- Cipollino -- which avoids these pitfalls using the state of the art in network measurement. Our evaluation shows that Cipollino achieves better security against network-level adversaries while maintaining security against relay-level adversaries and performance comparable to the current Tor client.
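The core AS-aware safety check can be sketched as follows. This is a hedged simplification rather than Cipollino's actual algorithm, and it assumes AS-level paths come from an external measurement or inference source:

```python
# A guard/exit pair is unsafe if any Autonomous System appears on both the
# client->guard path and the exit->destination path, since such an AS could
# observe traffic both entering and exiting Tor and correlate the two.

def safe_pair(client_to_guard_path, exit_to_dest_path):
    return not (set(client_to_guard_path) & set(exit_to_dest_path))

# Illustrative AS numbers: AS 3356 sees both sides, so reject this pair.
print(safe_pair([7922, 3356], [3356, 15169]))  # False -> pick another pair
print(safe_pair([7922, 174],  [3356, 15169]))  # True  -> pair is usable
```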

Phillipa Gill is an assistant professor in the Computer Science Department at the University of Massachusetts, Amherst. Her work focuses on many aspects of computer networking and security with a focus on designing novel network measurement techniques to understand online information controls, network interference, and interdomain routing. She currently leads the ICLab project which is working to develop a network measurement platform specifically for online information controls. She was recently included on N2Women’s list of 10 women in networking to watch. She has received the NSF CAREER award, Google Faculty Research Award and best paper awards at the ACM Internet Measurement Conference (characterizing online aggregators), and Passive and Active Measurement Conference (characterizing interconnectivity of large content providers).

Despite the reported attacks on critical systems, operational techniques such as malware analysis are not used to inform early lifecycle activities such as security requirements engineering. In our CERT research, we speculated that malware analysis reports (found in databases such as Rapid7) could be used to identify misuse cases that pointed toward overlooked security requirements. If we could identify such requirements, we thought they could be incorporated into future systems similar to those that were successfully attacked. We defined a process and then sponsored a CMU MSE Studio project to develop a tool. We had hoped that the malware report databases were amenable to automated processing and that they would point to flaws such as those documented in the CWE and CAPEC databases. It turned out not to be so simple. This talk will describe our initial proposal, the MSE Studio project and tool, student projects at other universities, and the research remaining to be done in both the requirements and architecture areas.

Nancy R. Mead is a Fellow and Principal Researcher at the Software Engineering Institute (SEI).  Mead is an Adjunct Professor of Software Engineering at Carnegie Mellon University.  She is currently involved in the study of security requirements engineering and the development of software assurance curricula.  She also served as director of software engineering education for the SEI from 1991 to 1994. Her research interests are in the areas of software security, software requirements engineering, and software architectures.

Prior to joining the SEI, Mead was a senior technical staff member at IBM Federal Systems, where she spent most of her career in the development and management of large real-time systems.  She also worked in IBM's software engineering technology area and managed IBM Federal Systems' software engineering education department.  She has developed and taught numerous courses on software engineering topics, both at universities and in professional education courses.

Mead is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Computer Society, and a Distinguished Member of the ACM. She received the 2015 Distinguished Education Award from the IEEE Computer Society Technical Council on Software Engineering. Mead has more than 150 publications and invited presentations, and she presently serves on the editorial board of the International Journal of Secure Software Engineering. She has been a member of numerous editorial boards, advisory boards, and committees. Dr. Mead earned her PhD in mathematics from the Polytechnic Institute of New York and BA and MS degrees in mathematics from New York University.

The strategic miscalculation of Iraq's Weapons of Mass Destruction (WMD) threat in 2003 provides a staggering example of how even very experienced leaders can be blinded by the foundational psychological effects that give rise to bias. This historical example raises the question: could modern predictive analytics, such as machine learning, close the WMD information gap if faced today?

Army leaders want to understand the benefits and limitations of advancements in predictive analytics, as well as in behavioral psychology, in order to understand the implications for decision-making competence. U.S. commanders need both a structured approach for decision-making (ways) and the ability to leverage advanced analytical capability (means) in order to achieve operational understanding (ends). This talk offers a structured approach to decision-making that embeds a methodology for Red Teaming to address foundational behavioral psychology effects.

In addition, I will offer a strategy for deploying tailored technical teams to provide commanders with access to the relevant data, resources, and skills to perform advanced analytical methods, including machine learning. It is in applying technological advances in big data to the crucible of ground combat that the Army can fulfill its role for the nation and maintain competitive advantage.

Colonel Mary Lou Hall is a United States Army War College Fellow in the Institute for Politics and Strategy in the Dietrich College at Carnegie Mellon University. Most recently, she served as a political-military analyst on the Joint Staff, J-8, in the Studies, Analysis and Gaming Division. A native of Richmond, Virginia, Colonel Hall graduated from West Point in 1992 and has served in a variety of personnel, manpower, and operations research assignments in several locations, including Fort Lewis, WA; Camp Casey, Republic of Korea; West Point, NY; and Kabul, Afghanistan; as well as on the Army and Joint Staffs in Washington, D.C. She holds a BS in Mechanical Engineering from the United States Military Academy and master's degrees in Engineering Management from Saint Martin's University in Lacey, WA, and in Operations Analysis from the Naval Postgraduate School, Monterey, CA. Colonel Hall specializes in Operations Research because she is passionate about making better decisions. Colonel Hall has been married to Colonel Andrew Hall since 1992, and they have two children: Cadet Catherine Hall (USMA '19) and Oscar, a freshman at Georgetown Preparatory School.

Over 300 years ago, an English carpenter realized that the key to safely navigating the ocean was being able to precisely measure time. He dedicated his life to building clocks that remained steady in rough water and across a wide range of temperatures. Since then, timing and localization technologies have continued to push the limits of technology, resulting in systems like GPS and instruments that are able to peer into the nature of gravitational waves. Unfortunately, existing localization technologies based on satellites and WiFi tend to perform poorly indoors and in urban environments. In enclosed spaces, precise synchronization and localization have the potential to enable applications ranging from asset tracking, indoor navigation, and augmented reality all the way to highly optimized beamforming for improved spatial capacity of wireless networks and enhanced network security.

In this talk, I will provide a brief overview of the state of the art in indoor location tracking and discuss two new systems that are able to precisely localize mobile phones as well as low-power tags. The first is a hybrid Bluetooth Low Energy and near-ultrasonic beaconing platform that is able to provide sub-meter accuracy to standard smartphones. The platform leverages the phone's IMU, as well as constraints derived from building floor plans, not only to localize itself but also to apply range-based SLAM techniques for bootstrapping its own infrastructure. The second platform leverages emerging Chip-Scale Atomic Clocks (CSACs) and ultra-wideband (UWB) radios to create distributed networks that are able to coordinate at a level that used to be possible only with large, power-hungry, and cost-prohibitive atomic clocks. With sub-nanosecond time synchronization accuracy and extremely low drift rates, it is possible to dramatically reduce communication guard bands and perform accurate speed-of-light Time-of-Arrival (TOA) measurements across distributed wireless networks.
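For intuition about why sub-nanosecond synchronization matters, here is a hedged sketch of one-way TOA ranging: with synchronized clocks, flight time converts directly to distance, and each nanosecond of residual clock error costs roughly 30 cm of ranging accuracy. The numbers are illustrative:

```python
# One-way Time-of-Arrival ranging: with transmitter and receiver clocks
# tightly synchronized, the signal's flight time converts directly to
# distance at the speed of light.

C = 299_792_458.0  # speed of light in m/s

def toa_distance(t_transmit_s, t_arrive_s):
    return C * (t_arrive_s - t_transmit_s)

print(toa_distance(0.0, 10e-9))  # 10 ns of flight time -> ~3.0 m
print(C * 1e-9)                  # each ns of clock error -> ~0.3 m of bias
```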

Anthony Rowe is an Associate Professor in the Electrical and Computer Engineering Department at Carnegie Mellon University. His research interests are in networked real-time embedded systems with a focus on wireless communication. His most recent projects have related to large-scale sensing for critical infrastructure monitoring, indoor localization, building energy efficiency, and technologies for microgrids. His past work has led to dozens of hardware and software systems, four best paper awards, and several widely adopted open-source research platforms. He earned a Ph.D. in Electrical and Computer Engineering from CMU in 2010, received the Lutron Joel and Ruth Spira Excellence in Teaching Award in 2013, and received the CMU CIT Early Career Fellowship and the Steven Fenves Award for Systems Research in 2015.
