The Ironclad project at Microsoft Research is using a set of new and modified tools based on automated theorem proving to build Ironclad services.  An Ironclad service guarantees to remote parties that every CPU instruction the service executes adheres to a high-level specification, convincing clients that the service will be worthy of their trust.  To provide such end-to-end guarantees, we built a full stack of verified software.  That software includes a verified kernel; verified drivers; verified system and cryptography libraries including SHA, HMAC, and RSA; and four Ironclad Apps.  As a concrete example, our Ironclad database provably provides differential privacy to its data contributors.  In other words, if a client encrypts her personal data with the database's public key, then it can only be decrypted by software that guarantees, down to the assembly level, that it preserves differential privacy when releasing aggregate statistics about the data. 
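
For reference, the guarantee at stake is standard ε-differential privacy (the symbols below are the textbook ones, not notation from the Ironclad papers): a randomized release mechanism M is ε-differentially private if, for all databases D and D' that differ in a single contributor's record, and for every set S of possible outputs,

    \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].

Intuitively, no individual's decision to contribute can change the distribution of released statistics by more than a factor of e^ε, and this is the property the verified stack must preserve all the way down to the assembly that decrypts the data.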

We then expanded the scope of our verification efforts to distributed systems, which are notorious for harboring subtle bugs.  We have developed IronFleet, a methodology for building practical and provably correct distributed systems.  We demonstrated the methodology on a complex implementation of a Paxos-based replicated state machine library and a lease-based sharded key-value store.  We proved that each obeys a concise safety specification, as well as desirable liveness requirements.  Each implementation achieves performance competitive with a reference system. 
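
To give a flavor of what such a safety specification says, here is a minimal Python sketch of the classic agreement invariant for replicated state machines; it is illustrative only, not IronFleet's actual specification (which is written and proved in Dafny):

    # Agreement invariant: two replicas may lag one another, but must never
    # disagree on a log index that both have learned.
    def logs_consistent(log_a, log_b):
        return all(a == b for a, b in zip(log_a, log_b))

    assert logs_consistent(["put x 1", "get x"], ["put x 1"])   # one replica lags: OK
    assert not logs_consistent(["put x 1"], ["put x 2"])        # divergence: safety violation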

In this talk, we describe our methodology, formal results, and lessons we learned from building large stacks of verified systems software.  In pushing automated verification tools to new scales (over 70K lines of code and proof so far), our team has both benefited from automated verification techniques and uncovered new challenges in using them.

By continuing to push verification tools to larger and more complex systems, Ironclad ultimately aims to raise the standard for security- and reliability-critical systems from "tested" to "correct".

Bryan Parno is a Researcher in the Security and Privacy Group at Microsoft Research.  After receiving a Bachelor's degree from Harvard College, he completed his PhD at Carnegie Mellon University, where his dissertation won the 2010 ACM Doctoral Dissertation Award.  In 2011, he was selected for Forbes' 30-Under-30 Science List. 

He formalized and worked to optimize verifiable computation, receiving a Best Paper Award at the IEEE Symposium on Security and Privacy for his advances.  He coauthored a book on Bootstrapping Trust in Modern Computers, and his work in that area has been incorporated into the latest security enhancements in Intel CPUs. His research into security for new application models was incorporated into Windows and received Best Paper Awards at the IEEE Symposium on Security and Privacy and the USENIX Symposium on Networked Systems Design and Implementation.  He has recently extended his interest in bootstrapping trust to the problem of building practical, formally verified secure systems. His other research interests include user authentication, secure network protocols, and security in constrained environments (e.g., RFID tags, sensor networks, or vehicles).

This session introduces essential ingredients for any cyber security program, called the Three T’s of Cyber Security: Talent, Tools and Techniques. Jim Routh, the CSO for Aetna and a board member of both the NH-ISAC and FS-ISAC, will share his perspective on which of the three T’s is the most significant. He will describe specific processes and methods in place at Aetna today, demonstrating the importance of “unconventional” controls in changing the rules for threat adversaries, and will provide specific examples of innovative uses of early-stage technology solutions.

Jim Routh is the Chief Security Officer and leads the Global Information Security function for Aetna. He is the Chairman of the National Health ISAC and a Board member of the FS-ISAC. He was formerly the Global Head of Application & Mobile Security for JP Morgan Chase. Prior to that he was the CISO for KPMG, DTCC and American Express, and he has over 30 years of experience in information technology and information security as a practitioner. He was named Information Security Executive of the Year for the Northeast in 2009 and Information Security Executive of the Year for Healthcare in North America in 2014.

Traffic correlation attacks to de-anonymize Tor users are possible when an adversary is in a position to observe traffic entering and exiting the Tor network. Recent work has brought attention to the threat of these attacks by network-level adversaries (e.g., Autonomous Systems). We perform a historical analysis to understand how the threat from AS-level traffic correlation attacks has evolved over the past five years. We find that, despite the large number of new relays added to the Tor network, the threat has grown. This points to the importance of increasing the AS-level diversity, in addition to the capacity, of the Tor network. We identify and elaborate on common pitfalls of AS-aware Tor client design and construction. We find that succumbing to these pitfalls can negatively impact three major aspects of an AS-aware Tor client -- (1) security against AS-level adversaries, (2) security against relay-level adversaries, and (3) performance. Finally, we propose and evaluate a Tor client -- Cipollino -- which avoids these pitfalls using state-of-the-art network measurement techniques. Our evaluation shows that Cipollino is able to achieve better security against network-level adversaries while maintaining security against relay-level adversaries and performance characteristics comparable to the current Tor client.
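
To make the first pitfall concrete: the core of AS-aware path selection is ensuring that no single AS can observe both ends of a circuit. Below is a minimal Python sketch of that check; the AS numbers and path data are hypothetical, and a real client like Cipollino must infer paths from measured routing data, which is itself a source of the pitfalls discussed in the talk.

    # Hypothetical AS-path data (AS numbers from the private-use range).
    DEMO_PATHS = {("client", "guard1"): {64500, 64501},
                  ("client", "guard2"): {64502},
                  ("exit1", "dest"):   {64500},
                  ("exit2", "dest"):   {64503}}

    def as_path(src, dst):
        return DEMO_PATHS.get((src, dst), set())

    def choose_circuit(guards, exits):
        """Pick a guard/exit pair such that no single AS sits on both the
        client-to-guard and exit-to-destination path segments."""
        for g in guards:
            for e in exits:
                if as_path("client", g).isdisjoint(as_path(e, "dest")):
                    return g, e
        return None

    print(choose_circuit(["guard1", "guard2"], ["exit1", "exit2"]))
    # ('guard1', 'exit2') -- AS 64500 would see both ends of the guard1/exit1 circuit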

Phillipa Gill is an assistant professor in the Computer Science Department at the University of Massachusetts, Amherst. Her work spans many aspects of computer networking and security, with a focus on designing novel network measurement techniques to understand online information controls, network interference, and interdomain routing. She currently leads the ICLab project, which is working to develop a network measurement platform specifically for online information controls. She was recently included on N2Women’s list of 10 women in networking to watch. She has received the NSF CAREER award, a Google Faculty Research Award, and best paper awards at the ACM Internet Measurement Conference (characterizing online aggregators) and the Passive and Active Measurement Conference (characterizing interconnectivity of large content providers).

Despite the reported attacks on critical systems, operational techniques such as malware analysis are not used to inform early lifecycle activities, such as security requirements engineering.  In our CERT research, we speculated that malware analysis reports (found in databases such as Rapid7) could be used to identify misuse cases that pointed towards overlooked security requirements.  If we could identify such requirements, we thought they could be incorporated into future systems that were similar to those that were successfully attacked.  We defined a process, and then sponsored a CMU MSE Studio Project to develop a tool.  We had hoped that the malware report databases were amenable to automated processing, and that they would point to flaws such as those documented in the CWE and CAPEC databases.  It turned out not to be so simple.  This talk will describe our initial proposal, the MSE Studio project and tool, student projects at other universities, and the research remaining to be done in both the requirements and architecture areas.

Nancy R. Mead is a Fellow and Principal Researcher at the Software Engineering Institute (SEI).  Mead is an Adjunct Professor of Software Engineering at Carnegie Mellon University.  She is currently involved in the study of security requirements engineering and the development of software assurance curricula.  She also served as director of software engineering education for the SEI from 1991 to 1994. Her research interests are in the areas of software security, software requirements engineering, and software architectures.

Prior to joining the SEI, Mead was a senior technical staff member at IBM Federal Systems, where she spent most of her career in the development and management of large real-time systems.  She also worked in IBM's software engineering technology area and managed IBM Federal Systems' software engineering education department.  She has developed and taught numerous courses on software engineering topics, both at universities and in professional education courses.

Mead is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Computer Society, and a Distinguished Member of the ACM. She received the 2015 Distinguished Education Award from the IEEE Computer Society Technical Council on Software Engineering.  Mead has more than 150 publications and invited presentations, and presently serves on the Editorial Board for the International Journal on Secure Software Engineering. She has been a member of numerous editorial boards, advisory boards and committees. Dr. Mead earned her PhD in mathematics from the Polytechnic Institute of New York and her BA and MS in mathematics from New York University.

The strategic miscalculation of Iraq’s Weapons of Mass Destruction (WMD) threat in 2003 provides a staggering example of how even very experienced leaders can be blinded by the foundational psychological effects that give rise to bias.  This historical example raises a further question: could modern predictive analytics, such as machine learning, have closed the WMD information gap if the same challenge were faced today?

Army leaders want to understand the benefits and limitations of advances in predictive analytics and behavioral psychology, and the implications of both for decision-making competence.  U.S. commanders need both a structured approach for decision-making (ways) and the ability to leverage advanced analytical capability (means) in order to achieve operational understanding (ends).  This talk offers a structured approach to decision-making that embeds a methodology for Red Teaming to address foundational behavioral psychology effects.

In addition, I will offer a strategy for deploying tailored technical teams to provide commanders with access to the relevant data, resources and skills to perform advanced analytical methods, including machine learning.  It is in applying technological advances in big data to the crucible of ground combat that the Army can fulfill its role for the nation and maintain competitive advantage.

Colonel Mary Lou Hall is a United States Army War College Fellow in the Institute for Politics and Strategy in the Dietrich College at Carnegie Mellon University.  Most recently, she served as a political-military analyst on the Joint Staff, J-8, in the Studies Analysis and Gaming Division. A native of Richmond, Virginia, Colonel Hall graduated from West Point in 1992 and has served in a variety of personnel, manpower and operations research assignments in several locations including Fort Lewis, WA, Camp Casey, Republic of Korea, West Point, NY and Kabul, Afghanistan, as well as on the Army and Joint Staffs in Washington, D.C. She holds a BS in Mechanical Engineering from the United States Military Academy and master’s degrees in Engineering Management from Saint Martin’s University in Lacey, WA, and in Operations Analysis from the Naval Postgraduate School, Monterey, CA. Colonel Hall specializes in Operations Research because she is passionate about making better decisions. Colonel Hall has been married to Colonel Andrew Hall since 1992 and they have two children, Cadet Catherine Hall (USMA ‘19) and Oscar, who is a freshman at Georgetown Preparatory High School.

Over 300 years ago, an English carpenter realized that the key to safely navigating the ocean was being able to measure time precisely.  He dedicated his life to building clocks that remained steady in rough water and across a wide range of temperatures. Since then, timing and localization technologies have continued to push the limits of engineering, resulting in systems like GPS and instruments that are able to peer into the nature of gravitational waves.  Unfortunately, existing localization technologies based on satellites and WiFi tend to perform poorly indoors or in urban environments. In enclosed spaces, precise synchronization and localization have the potential to enable applications ranging from asset tracking, indoor navigation and augmented reality all the way to highly optimized beamforming for improved spatial capacity of wireless networks and enhanced network security.

In this talk, I will provide a brief overview of the state of the art in indoor location tracking and discuss two new systems that are able to precisely localize mobile phones as well as low-power tags.  The first is a hybrid Bluetooth Low Energy and near-ultrasonic beaconing platform that is able to provide sub-meter accuracy to standard smartphones.  The platform leverages the phone’s IMU as well as constraints derived from building floor plans not only to localize itself, but also to apply range-based SLAM techniques for bootstrapping its own infrastructure. The second platform leverages emerging Chip Scale Atomic Clocks (CSACs) and ultra-wideband (UWB) radios to create distributed networks that are able to coordinate at a level that used to be possible only with large, power-hungry and cost-prohibitive atomic clocks. With sub-nanosecond time synchronization accuracy and extremely low drift rates, it is possible to dramatically reduce communication guard bands and perform accurate speed-of-light Time-of-Arrival (TOA) measurements across distributed wireless networks.
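
To see why sub-nanosecond synchronization matters for speed-of-light ranging, consider the basic TOA arithmetic (a back-of-the-envelope Python sketch; the numbers are illustrative, not measurements from either platform):

    C = 299_792_458.0  # speed of light, m/s: roughly 0.3 m per nanosecond

    def toa_distance(t_transmit_s, t_arrive_s):
        """One-way time-of-arrival range estimate; assumes the transmitter's
        and receiver's clocks are synchronized."""
        return C * (t_arrive_s - t_transmit_s)

    print(toa_distance(0.0, 10e-9))  # a true 10 ns flight time -> ~3.0 m
    print(toa_distance(0.0, 11e-9))  # the same flight with 1 ns of clock error -> ~3.3 m

Light travels about 30 cm per nanosecond, so every nanosecond of residual clock error between nodes translates directly into roughly 30 cm of ranging error, which is why sub-nanosecond synchronization is the enabling capability.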

Anthony Rowe is an Associate Professor in the Electrical and Computer Engineering Department at Carnegie Mellon University. His research interests are in networked real-time embedded systems with a focus on wireless communication. His most recent projects have related to large-scale sensing for critical infrastructure monitoring, indoor localization, building energy-efficiency and technologies for microgrids. His past work has led to dozens of hardware and software systems, four best paper awards and several widely adopted open-source research platforms. He earned a Ph.D. in Electrical and Computer Engineering from CMU in 2010, received the Lutron Joel and Ruth Spira Excellence in Teaching Award in 2013, and received the CMU CIT Early Career Fellowship and the Steven Fenves Award for Systems Research in 2015.

Today’s consumer-facing online services are measured by the size and growth of their user account base, as users are both contributors of content and a channel for monetization.  Despite being their backbone, these user accounts are also their “Achilles heel” — well-organized crime rings leverage compromised or fraudulent accounts to hide amongst billions of benign users, waging a variety of large-scale attacks.

In this talk, I will present the anatomy of modern attacks and the sophisticated attack techniques that we have observed across a number of services, including social networking, gaming, financial, e-commerce and other vertical markets.  I will then discuss the new challenges we face in defending against these attacks in the billion-user era.  Finally, I’ll outline the directions pursued by DataVisor through unsupervised big data analytics to detect and mitigate large attack campaigns early, without prior knowledge of attack patterns.
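
As a simplified illustration of the unsupervised approach, the sketch below (Python) correlates accounts by shared signup attributes; the field names are hypothetical, and DataVisor's actual algorithms are more sophisticated than this grouping heuristic:

    from collections import defaultdict

    def suspicious_clusters(accounts, min_size=3):
        """Group accounts by a shared signup fingerprint; unusually large
        groups are candidates for coordinated fraud campaigns. No labeled
        attack patterns are required."""
        groups = defaultdict(list)
        for acct in accounts:
            key = (acct["signup_ip"], acct["user_agent"])  # hypothetical features
            groups[key].append(acct["id"])
        return [ids for ids in groups.values() if len(ids) >= min_size]

The design point is that coordinated campaigns leave correlated traces across many accounts, so clustering can surface them even when each individual account looks benign.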

Yinglian Xie is the CEO and co-founder of DataVisor, a leading big-data fraud detection provider that identifies large attack campaigns before they cause damage.  She received her Ph.D. in Computer Science from CMU and has been working in the area of Internet security and privacy for over 10 years.  Prior to founding DataVisor, Yinglian was a researcher at Microsoft Research Silicon Valley, where she developed a series of new technologies that shipped in Microsoft products.  These include social graphing techniques for user authentication in Microsoft Hotmail and algorithms to detect fraudulent transactions for Xbox that saved the company millions of dollars per year.  She holds over 20 patents for her work, has been widely published in top conferences, and has served on the program committees of many of them.  Yinglian’s work has helped improve the security of billions of online users.

Most computing systems still rely on user-chosen passwords to authenticate access to data and systems.  But passwords are hard to use, easy to guess, and tricky to store securely.  In practice, one sees high failure rates for (legitimate) password login attempts, as well as a never-ending stream of damaging password database compromises.  I will present a sequence of new results that target making password authentication systems better.

We will look at how to address concerns in three areas: (1) usability by way of easy-to-deploy typo-tolerant password authentication validated using experiments at Dropbox; (2) hardening password storage against cracking attacks via our new Pythia crypto service; and, time allowing, (3) building cracking-resistant password vaults via a new cryptographic primitive called honey encryption.
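
As a flavor of how typo-tolerant checking can work, here is a minimal Python sketch: the server checks the submitted password, plus a small set of likely corrections, against the stored hash. The corrector set (caps lock, shifted first character, extra trailing character) reflects common typo classes studied in this line of work; this is an illustration, not Dropbox's production scheme.

    import hashlib

    def correctors(pw):
        """The submitted password plus a few likely typo corrections."""
        yield pw
        yield pw.swapcase()                   # caps-lock was on
        if pw:
            yield pw[0].swapcase() + pw[1:]   # accidentally shifted first character
            yield pw[:-1]                     # extra trailing character

    def check(submitted, stored_hash):
        # Illustration only: a real system uses a slow, salted hash
        # (e.g., bcrypt or scrypt) and rate-limits attempts.
        return any(hashlib.sha256(c.encode()).hexdigest() == stored_hash
                   for c in correctors(submitted))

    stored = hashlib.sha256(b"Secret1").hexdigest()
    print(check("sECRET1", stored))  # True: the caps-lock typo is corrected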

The talk will cover joint work with Anish Athalye, Devdatta Akhawe, Joseph Bonneau, Rahul Chatterjee, and Ari Juels.

Thomas Ristenpart is an Associate Professor at Cornell Tech and a member of the Computer Science department at Cornell University. Before joining Cornell Tech in May 2015, he spent four years as an Assistant Professor at the University of Wisconsin-Madison. His research spans a wide range of computer security topics, with a recent focus on cloud computing security, as well as topics in applied and theoretical cryptography.

His work has been featured in numerous publications including the New York Times, The MIT Technology Review, ABC News, and U.S. News and World Report. He completed his Ph.D. at UC San Diego in 2010. His awards include the UC San Diego Computer Science and Engineering Department Dissertation Award, an NSF CAREER Award, the Best Paper Award at USENIX Security 2014, and a Sloan Research Fellowship.

The growing frequency and severity of cybersecurity incidents have led government and private sector organizations to seek better ways to protect their systems and information. Many of these organizations have begun by adopting risk management frameworks as a way of structuring their approach to security. But risk management is only effective if it is informed by a deep understanding of attacks and the ways to defend against them. The history and structure of successful software security programs show how technical understanding can be integrated into risk management decisions. This talk will summarize the history of a typical software security program and outline principles by which an understanding of attacks and defenses, combined with continuous improvement, leads to effective risk management.

Steven B. Lipner is the creator and long-time leader of the Microsoft Security Development Lifecycle (SDL). The SDL was the first scalable and effective approach to achieving security assurance for large-scale software systems and has been applied by Microsoft and numerous other development organizations.

Early in his career, Mr. Lipner made contributions that helped set the direction of computer security research. He originated the approach of using a Virtual Machine Monitor to achieve multilevel security, and managed the team that developed the fundamental model for multilevel security and the first security kernel that implemented that model. He was a key industry contributor to the “Orange Book” that guided government evaluations of commercial operating system security. Mr. Lipner is a member of the National Cybersecurity Hall of Fame (Class of 2015).

