Thursday, January 25, 2018 - 09:30 to 16:45
Lecture Theatre F, Lecture Theatre Building, University of Surrey, Guildford, GU2 7XH.
Attendance is by invitation only and registration is free. If you wish to receive an invitation, please contact Professor Liqun Chen at email@example.com.
The Surrey Centre for Cyber Security (SCCS) is organising a week of activities focussing on Trusted Computing research. The centrepiece of the week will be a one-day workshop on “Trusted Computing and its Applications”. Sponsorship from the UK National Cyber Security Centre (NCSC) has enabled us to invite three leading international researchers to give keynote speeches. In addition, PhD students and young researchers from the UK Academic Centres of Excellence in Cyber Security Research will present their work.
9:30am – Registration, coffee/tea
10:00am – Welcome from Professor Liqun Chen, SCCS, University of Surrey
10:10am – Professor Gene Tsudik, University of California, Irvine
11:10am – Coffee/tea
11:30am – Professor N Asokan, Aalto University
12:30pm – Lunch, a buffet luncheon will be provided for all participants in Wates House
2:00pm – Student and young researcher session
3:00pm – Coffee/tea
3:30pm – Professor Virgil Gligor, Carnegie Mellon University
4:30pm – Closing remarks
Research posters will be displayed during the coffee/tea breaks and at lunchtime.
If you would like to give a short talk in the student and young researcher session and/or present a poster, please contact Mr Jorden Whitefield at firstname.lastname@example.org. The travel expenses of all speakers will be paid.
Professor N. Asokan
Title: Common-sense applications of hardware-based trusted execution environments
Abstract: Hardware-based trusted execution environments (TEEs) have been widely deployed in smartphones and tablets for several years. Recently, Intel introduced Software Guard Extensions (SGX), which can be used to realize TEEs for the x86 architecture. Despite such broad availability, few applications actually make use of this functionality. Furthermore, researchers have been wary of using TEE technologies because of technical considerations (e.g., the possibility of side channels) and philosophical ones (e.g., control by device manufacturers, and the need to trust them). A common perception is that trusted execution environments, and more broadly trusted computing technologies, were designed and deployed to limit the freedom of end users. In this talk, I will argue that there are a number of "common-sense" applications of TEEs that can benefit end users, and I will describe two such applications from our recent research.
Bio: N. Asokan is a professor of Computer Science at Aalto University, Finland. His research interests are broadly in systems security. He is the lead academic PI of the Intel Collaborative Research Center (http://www.icri-sc.org) in Finland and is the director of the Helsinki-Aalto Center for Information Security (https://haic.aalto.fi). Asokan is an IEEE Fellow and an ACM Distinguished Scientist. More information about him and his research is available on his website (http://asokan.org/asokan/) and Twitter (@nasokan).
Professor Gene Tsudik
Title: Presence Attestation: The Missing Link In Dynamic Trust Bootstrapping
Abstract: Many popular modern processors include an important hardware security feature in the form of a DRTM (Dynamic Root of Trust for Measurement), which helps bootstrap trust and resists software attacks. However, despite a substantial body of prior research on trust establishment, the security of DRTM has been treated without involving the human user, who represents a vital missing link. The basic challenge is: how can a human user determine whether an expected DRTM is currently active on her device?
In this work, we define the notion of “presence attestation”, which is based on mandatory, though minimal, user participation. We present three concrete presence attestation schemes: sight-based, location-based and scene-based. They vary in terms of security and usability features, and are suitable for different application contexts. After analyzing their security, we assess their performance based on prototype implementations.
This is joint work with: Z Zhang, X Ding, J Cui, and Z Li (Singapore Management University).
Bio: Gene Tsudik is a Chancellor’s Professor of Computer Science at UC Irvine (UCI). He obtained his PhD in Computer Science from USC in 1991. Gene began his research career at IBM Zurich Research Laboratory (1991-1996), followed by USC/ISI (1996-2000) and UCI (since 2000). His research interests include(d) numerous topics in security, privacy and applied cryptography. Between 2009 and 2016, he served as the Editor-in-Chief of ACM Transactions on Information and System Security (TISSEC). He’s a former Fulbright Scholar, Fulbright Specialist, Fellow of IEEE, ACM and AAAS, as well as a foreign member of Academia Europaea. Gene is the recipient of the 2017 ACM SIGSAC Outstanding Contribution Award. He is also the author of the first crypto-poem published at a refereed venue.
Professor Virgil Gligor
Title: Establishing and Maintaining Root of Trust on Commodity Computer Systems
Abstract: Suppose that a trustworthy program must be booted on a commodity system that may contain persistent malware. For example, a formally verified micro-kernel, micro-hypervisor, or a subsystem obtained from a trustworthy provider must be booted on a computer system that runs Windows, Linux, iOS, or Android. Establishing root of trust (RoT) assures that the system has all and only the content known to the user, or that the user discovers unaccounted content, with high probability (whp). Hence, RoT establishment assures that the trusted program boot takes place in a malware-free state, whp. Obtaining such an assurance is challenging because malware can survive in system states across repeated secure- and trusted-boot operations; e.g., these operations do not always have malware-unmediated access to device controllers’ processors and memories. In this presentation, I will illustrate both the theoretical and practical challenges of establishing RoT unconditionally; i.e., without secrets, privileged modules (e.g., TPMs, RoMs, HSMs), or adversary bounds.
Establishing root of trust is important because it makes all persistent malware ephemeral and forces the adversary to repeat the malware-insertion attack, perhaps at some added cost. Nevertheless, some malware-controlled software can always be assumed to exist in commodity operating systems and applications. The inherent size and complexity of their components (aka the “giants”) render them vulnerable to successful attacks. In contrast, small and simple software components with rather limited function and high-assurance layered security properties (aka the “wimps”) can, in principle, be resistant to all attacks.
Maintaining root of trust assures a user that a commodity computer’s wimps are isolated from, and safely co-exist with, adversary-controlled giants. However, regardless of how secure program isolation may be (e.g., based on Intel’s SGX), I/O channel isolation must also be achieved despite the pitfalls of commodity architectures, which encourage I/O hardware sharing rather than isolation. In this presentation I will illustrate the challenges of I/O channel isolation and present an approach that enables the co-existence of secure wimps with insecure giants, via two examples of experimental systems, i.e., on-demand isolated I/O channels and a trusted display service, which were designed and implemented at CMU’s CyLab.
Bio: Virgil D. Gligor received his B.Sc., M.Sc., and Ph.D. degrees from the University of California at Berkeley. He taught at the University of Maryland between 1976 and 2007, and is currently a Professor of ECE at Carnegie Mellon University. Between 2007 and 2015 he was the co-Director of CyLab. Over the past forty years, his research interests have ranged from access control mechanisms, penetration analysis, and denial-of-service protection to cryptographic protocols and applied cryptography. Gligor was an editorial board member of several ACM and IEEE journals and the Editor-in-Chief of the IEEE Transactions on Dependable and Secure Computing. He received the 2006 National Information Systems Security Award jointly given by NIST and NSA, the 2011 Outstanding Innovation Award of the ACM SIG on Security, Audit and Control, and the 2013 Technical Achievement Award of the IEEE Computer Society.