Game theory is about decision making: how and why a party makes a decision, how one party's decision affects the others' decisions, and how multiple players eventually come up (or fail to come up) with a stable solution for the game. It is a fascinating theory that gives insights into conflict, cooperation, deception and trust between intelligent rational decision makers.
I started my research on game theory for security in 2015. What motivated me to explore this direction was the verifiability problem in cloud computing. Cloud computing is a service provided by an external party, so it is difficult for the client to fully trust the cloud provider. Should the cloud return a wrong result for a mission-critical task, the consequence could be disastrous. To exercise due diligence and gain greater confidence in computation outsourced to the cloud, clients need to be able to verify the correctness of the results returned. The main problem with existing verifiable computation techniques is their high overhead (including our own work in FC16); if deployed in the cloud, they would render cloud computing more expensive than its on-premises counterpart.
To achieve verifiability at a reasonable cost, we leveraged game theory and proposed a smart-contract-based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by cross-checking the results from the two clouds. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, and a small transaction fee to use the smart contracts. We also conducted a feasibility study that involved implementing the contracts in Solidity and running them on the official Ethereum network. See our paper in CCS17.
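The cross-checking step itself can be sketched in a few lines. This is only a toy illustration of the replication idea, not our actual smart contracts; the function names are mine, and the incentive machinery (deposits, penalties, the counter-collusion games) is abstracted away into a single dispute exception.

```python
def verify_by_replication(task, cloud_a, cloud_b):
    """Toy sketch of replication-based verification: the client
    sends the same task to two clouds and accepts the result only
    if the two answers agree. A mismatch would, in the real system,
    trigger the dispute/penalty logic of the smart contracts."""
    result_a = cloud_a(task)
    result_b = cloud_b(task)
    if result_a == result_b:
        return result_a  # results cross-check: accept
    raise ValueError("mismatch: raise a dispute via the contract")

# Example: two honest clouds computing the same function agree.
honest = lambda task: sum(task)
print(verify_by_replication([1, 2, 3], honest, honest))  # 6
```

The point of the game-theoretic design is precisely to make the "both agree" branch the rational outcome, so that this cheap comparison suffices in place of cryptographic verification.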
Secure computation is a transformative technology that has enormous potential. Traditionally, an organisation can lock its data in secure storage and process it within an in-house facility operated by trusted staff. But increasingly, data processing is moving out of the trusted zone, and security mechanisms that used to be effective no longer work. Secure computation promises a solution to this pressing challenge and will completely change the landscape of cyber security. Secure computation allows arbitrary functions to be computed directly on encrypted data while hiding all information about the data from untrusted parties, even if those parties are involved in the computation. Secure computation could enable, for example, companies to outsource and process their data in untrusted clouds, or two mutually distrusting parties to mine their datasets together. However, secure computation is currently often too inefficient to be practical. Despite huge recent improvements, secure computation is still tens of thousands to billions of times slower than computation in the clear. This is a major impediment to the widespread use of secure computation.
I started my research on efficient secure computation in 2011. I tackle this problem from completely new angles and have developed two novel approaches:
A major application domain of secure computation is cloud computing. The cloud is run and managed by an external party whose interests may not fully align with those of its clients, so it is difficult for the clients to fully trust the cloud with their sensitive data. An obvious solution is client-side encryption. However, this only works if the cloud acts merely as a remote data storage server. If the clients want the cloud to both store the data and use it as input to computation tasks, simple encryption does not work. We have been working on protocols for secure delegated cloud computation. The aim is to allow the cloud to store blinded data and use it in computation without learning anything about the data, while allowing the clients to control the computation and verify the integrity of the results. See our papers in SEC15, FC16, and TDSC (2017).
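To give a flavour of how a cloud can compute on blinded data, here is a minimal illustration using one-time additive masks for an outsourced sum. This is a deliberately simplified sketch, not the protocol from our papers: the client masks each value, the cloud adds the masked values without learning the inputs, and the client removes the masks from the result.

```python
import secrets

MOD = 2**64  # work modulo a fixed word size

def blind(values):
    """Client side: add a fresh random mask to each value."""
    masks = [secrets.randbelow(MOD) for _ in values]
    blinded = [(v + m) % MOD for v, m in zip(values, masks)]
    return blinded, masks

def cloud_sum(blinded):
    """Cloud side: sums the masked values; each input is
    statistically hidden by its one-time mask."""
    return sum(blinded) % MOD

def unblind(blinded_sum, masks):
    """Client side: strip the aggregate mask from the result."""
    return (blinded_sum - sum(masks)) % MOD

data = [10, 20, 12]
blinded, masks = blind(data)
assert unblind(cloud_sum(blinded), masks) == sum(data)
```

Real delegated-computation protocols must of course support richer functions than addition and provide integrity guarantees, which is where the bulk of the technical work lies.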
In the past, I worked on searchable encryption. I designed the first searchable encryption scheme that allows multiple users to read, write and search data securely without having to share cryptographic keys. The paper was published at DBSec 2008 and was selected as the best paper. An extended version was published in the Journal of Computer Security.
I also developed a cryptographic location sharing protocol for smart phones that eases privacy concerns by making it possible to share a user's location data blindly and allowing the user to control who can access his location, when and to what degree of precision. The result was published in IFIPTM 2011.
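The precision-control aspect can be illustrated without the cryptography: the idea of releasing a location "to a degree of precision" amounts to snapping coordinates to a coarser grid before sharing. This toy sketch is mine, not the published protocol, and ignores the blinding layer entirely.

```python
def degrade(lat, lon, precision_km):
    """Coarsen a location to roughly the requested precision by
    snapping coordinates to a grid (1 degree of latitude is about
    111 km; longitude is treated the same way for simplicity)."""
    step = precision_km / 111.0
    snap = lambda x: round(x / step) * step
    return snap(lat), snap(lon)

# A friend granted 10 km precision sees only the coarsened point.
coarse = degrade(54.9783, -1.6178, 10)
```

In the actual protocol, this degradation is enforced cryptographically rather than by trusting the server to round honestly.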
I also worked on designing a cryptographic protocol for location and time based authentication. The protocol aimed to prevent fake cultural assets from being displayed in exhibitions and being sold in auctions. The system has been implemented and validated on real case studies. The result was published in IFIPTM 2008 and Personal and Ubiquitous Computing journal.
I developed a protocol for credential verification in non-monotonic trust management. The protocol allows non-monotonic trust evaluation without revealing additional information. The result was published in MMM-ACNS 2007.
My PhD thesis was about a non-monotonic trust management system. I designed a policy language based on the bilattice theory and the any world assumption. The bilattice theory allows us to represent and reason with imperfect information (e.g. incomplete or inconsistent information). The any world assumption gives us the power to represent the background information as non-uniform assumptions. The result was also published in IFIPTM 2010.
An interesting observation is that trust is not always transitive. A recommendation may or may not enable you to trust another entity. I developed a modal logic that captures how trust and beliefs evolve in distributed systems. The logic allows us to identify the key constraints on trust transitivity. The result was published in IFIPTM 2007.
In large policy-based access control systems, conflicts often arise: an access request can be authorised and denied at the same time by different policy rules. The system must deal with conflicts, ideally through an automated resolution mechanism. I was a main contributor to the conflict resolution sub-system for the Ponder2 policy framework, one of the most advanced and powerful policy systems in the world. The sub-system can automatically resolve conflicts at runtime. Rather than hard-coding the conflict resolution strategies into the system, my solution allows administrators to encode them as simple logic-based meta-policies. This makes conflict resolution more flexible and adaptable to different requirements. The result was published in Policy 2007 and DSOM 2008.
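The separation between the resolution engine and the strategies can be sketched as follows. This is an illustrative toy in Python, not Ponder2 (which expresses meta-policies in a logic-based language): each rule yields a (decision, priority) pair, and the administrator-chosen strategy picks the final decision.

```python
# Each policy rule that matches a request contributes a ruling:
# a (decision, priority) pair. Strategies are plain functions the
# administrator selects as meta-policy, so swapping strategies
# requires no change to the engine itself.

def deny_overrides(rulings):
    """Any single 'deny' wins over any number of 'permit's."""
    return "deny" if any(d == "deny" for d, _ in rulings) else "permit"

def highest_priority(rulings):
    """The rule with the highest priority decides."""
    return max(rulings, key=lambda r: r[1])[0]

def resolve(rulings, strategy):
    return strategy(rulings)

rulings = [("permit", 1), ("deny", 5)]
assert resolve(rulings, deny_overrides) == "deny"
assert resolve(rulings, highest_priority) == "deny"
```

Encoding strategies as data (or, in Ponder2, as meta-policies) is what lets the resolution behaviour be changed at runtime to match different organisational requirements.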
Access control in healthcare systems is difficult because conventional mechanisms are too rigid and often lack the capability to deal with exceptions. To this end, I explored the idea of workflow-based access control. The workflow-based access model focuses on procedures rather than individuals. The idea is that need-to-know is captured by the workflows widely used in medical practice, and entities receive privileges only when they are assigned to and running a certain task in the workflow. This allows flexibility while also maximising security. The result was published in AHM 2007, PervasiveHealth 2008, and FINA 2008. The workflow model was also enhanced with patient consent to allow secondary use of patient data. The result was published in Policy 2008.
2015 - present, Roberto Metere (supervisor)
2017 - present, Raffaello Perrotta (2nd supervisor)
2013 - 2016, Aydin Abadi (supervisor)
2011 – 2013 Richard Harker, (2nd supervisor)
2017 Alistair Ridley
2012 – 2013 Zikai Wen (Now a PhD student at Cornell University)