Keynote Speakers
Nicholas Carlini, Google Brain
How Private is Machine Learning?
Abstract:
A machine learning model is private if it does not reveal (too much) about its training data. This three-part talk examines to what extent current models are private. First, standard models are not private: we develop an attack that extracts rare training examples (for example, individual people's names, phone numbers, or addresses) from GPT-2, a language model trained on gigabytes of text from the Internet.
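To make the first part concrete, below is a minimal sketch of a generate-and-rank extraction loop against the public GPT-2 checkpoint; it assumes the Hugging Face transformers package and uses plain perplexity as the ranking signal, which is a simplified stand-in for the metrics used in the actual attack.

    # Sketch: sample many generations from GPT-2, then surface the ones the
    # model is most confident about (low perplexity), which are candidate
    # memorized training examples. Sample counts and the ranking metric are
    # illustrative assumptions, not the exact attack from the talk.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text):
        # The model's own perplexity on the text; low values suggest memorization.
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    bos = torch.tensor([[tok.bos_token_id]])
    samples = []
    for _ in range(100):  # the real attack draws far more samples
        out = model.generate(bos, do_sample=True, top_k=40, max_length=64,
                             pad_token_id=tok.eos_token_id)
        text = tok.decode(out[0], skip_special_tokens=True)
        if len(text.strip()) > 20:
            samples.append(text)

    # Inspect the lowest-perplexity generations for memorized content.
    for text in sorted(samples, key=perplexity)[:10]:
        print(repr(text))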
As a result, there is a clear need to train models with privacy-preserving techniques. Second, we show that InstaHide, a recently proposed candidate, is not private: we develop a complete break of the scheme and can again recover verbatim training inputs.
Finally, and fortunately, there exists provably correct "differentially-private" training that guarantees no adversary could ever succeed at the above attacks. We develop techniques that allow us to empirically evaluate the privacy offered by such schemes, and find that they may be more private than can be proven formally.
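One simple way to see what such an empirical evaluation can look like is the classic loss-threshold membership-inference test: compare the trained model's per-example loss on training points and on held-out points, and measure how much better than chance an attacker can distinguish them. The sketch below assumes a trained PyTorch classifier and labeled member/non-member tensors; it is a textbook baseline, not the specific auditing technique presented in the talk.

    # Loss-threshold membership inference as a rough empirical privacy probe.
    # `model` is assumed to return logits; x_train/y_train are training
    # ("member") examples and x_test/y_test are held-out ("non-member") ones.
    import torch
    import torch.nn.functional as F

    def per_example_loss(model, x, y):
        with torch.no_grad():
            return F.cross_entropy(model(x), y, reduction="none")

    def membership_advantage(model, x_train, y_train, x_test, y_test):
        # Guess "member" whenever the loss falls below the median held-out loss;
        # an advantage near zero means the attacker does no better than chance.
        train_loss = per_example_loss(model, x_train, y_train)
        test_loss = per_example_loss(model, x_test, y_test)
        threshold = test_loss.median()
        true_pos = (train_loss < threshold).float().mean()
        false_pos = (test_loss < threshold).float().mean()
        return (true_pos - false_pos).item()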
Biography: Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at ICML and IEEE S&P. He obtained his PhD from the University of California, Berkeley in 2018.
François-Xavier Standaert, UCL Crypto Group, Université catholique de Louvain, Louvain-la-Neuve, Belgium
Evaluating and Designing Against Side-Channel Leakage: White Box or Black Box?
Abstract:
Side-channel analysis is an important concern for the security of cryptographic implementations, and may lead to powerful key-recovery attacks if no countermeasures are deployed. Therefore, various types of protection mechanisms have been proposed over the last 20 years. In view of the cost and performance overheads caused by these protections, their fair evaluation and sparing use are a primary concern for hardware and software designers. Yet, the physical nature of side-channel analysis also renders the security evaluation of cryptographic implementations very different from that of cryptographic algorithms against mathematical cryptanalysis. That is, while the latter can be quantified based on (well-defined) time, data and memory complexities, the evaluation of side-channel security additionally requires quantifying the informativeness and exploitability of the physical leakages. This implies that a part of these security evaluations is inherently heuristic and dependent on engineering expertise. It also raises the question of the capabilities given to the adversary/evaluator. For example, should she get full (unrestricted) access to the implementation to gain a precise understanding of its functioning (which I will denote as the white box approach), or should she be more restricted? In this talk, I will argue that a white box approach is not only desirable in order to avoid designing and evaluating implementations with a “false sense of security”, but also that such designs become feasible in view of the research progress made over the last two decades.
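As one concrete example of quantifying the informativeness of physical leakages, evaluators commonly estimate a signal-to-noise ratio over traces partitioned by the value of a leaking intermediate. The sketch below does this on simulated traces under an assumed Hamming-weight leakage model with Gaussian noise; real evaluations would of course use measured traces and many time samples.

    # Estimate the SNR of one leaking time sample: variance of the per-value
    # means (signal) divided by the mean per-value variance (noise). The
    # leakage model, noise level and trace count are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_traces, noise_std = 10_000, 2.0

    values = rng.integers(0, 256, size=n_traces)              # leaking 8-bit intermediate
    hw = np.array([bin(v).count("1") for v in values])        # Hamming-weight leakage model
    traces = hw + rng.normal(0.0, noise_std, size=n_traces)   # simulated leakage sample

    class_means = np.array([traces[values == v].mean() for v in range(256)])
    class_vars = np.array([traces[values == v].var() for v in range(256)])
    snr = class_means.var() / class_vars.mean()
    print(f"estimated SNR: {snr:.3f}")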
Biography: François-Xavier Standaert was born in Brussels, Belgium in 1978. He received his Electrical Engineering degree and PhD from the Université catholique de Louvain in 2001 and 2004, respectively. In 2004-2005, he was a Fulbright visiting researcher at Columbia University, Department of Computer Science, Crypto Lab (hosted by Tal G. Malkin and Moti Yung) and at the MIT Media Lab, Center for Bits and Atoms (hosted by Neil Gershenfeld). In 2006, he was a founding member of IntoPix s.a. From 2005 to 2008, he was a post-doctoral researcher of the Belgian Fund for Scientific Research (FNRS-F.R.S.) at the UCL Crypto Group and a regular visitor of the two aforementioned laboratories. Since 2008 (resp. 2017), he has been an associate researcher (resp. senior associate researcher) of the Belgian Fund for Scientific Research (FNRS-F.R.S.). Since 2013 (resp. 2018), he has been an associate professor (resp. professor) at the UCL Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM). In 2010, he was program co-chair of CHES (the flagship workshop on cryptographic hardware), and in 2021 he will be program co-chair of EUROCRYPT (one of the flagship IACR conferences). In 2011, he was awarded a Starting Independent Research Grant by the European Research Council, and in 2016 a Consolidator Grant. From 2017 to 2022, he serves as a board member (director) of the International Association for Cryptologic Research (IACR). He gave an invited talk at Eurocrypt 2019. His research interests include cryptographic hardware and embedded systems, low-power implementations for constrained environments (RFIDs, sensor networks, ...), the design and cryptanalysis of symmetric cryptographic primitives, physical security issues in general, and side-channel analysis in particular.
Alexandre Sablayrolles, Facebook, Paris, France
Tracing data through learning with watermarking?
Abstract:
Models trained with differential privacy (DP) provably limit information leakage, but the question remains open for non-DP models.
In this talk, we present several techniques for membership inference, which estimates whether a given data sample was part of a model's training set.
In particular, we introduce a watermarking-based method that allows very fast verification of data usage in a model: the technique creates so-called radioactive marks that propagate from the data to the model during training.
This watermark is barely visible to the naked eye and allows data tracing even when the radioactive data represents only 1% of the training set.
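The verification side of this idea can be illustrated with a small sketch: if the mark corresponds to a fixed secret "carrier" direction added to the features of the marked class, data usage can be tested by checking whether the trained classifier's weight vector is aligned with that direction more than chance would allow. Everything below (the linear-classifier setting, the Gaussian null model, the toy numbers) is a simplifying assumption for illustration, not the exact statistical test from the talk.

    # Simplified radioactive-mark verification: does the class weight vector
    # align with the secret carrier direction u beyond what random chance
    # (cosine ~ N(0, 1/d) in d dimensions) would explain?
    import numpy as np
    from math import erf, sqrt

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def alignment_p_value(w_class, u, dim):
        z = cosine(w_class, u) * np.sqrt(dim)       # standardized alignment score
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))     # one-sided p-value

    # Toy usage: a "marked" classifier partially absorbs the carrier direction.
    rng = np.random.default_rng(0)
    d = 512
    u = rng.normal(size=d); u /= np.linalg.norm(u)           # secret carrier
    w_clean = rng.normal(size=d)                             # trained without marked data
    w_marked = w_clean + 0.5 * np.linalg.norm(w_clean) * u   # trained with marked data

    print("p-value, clean model: ", alignment_p_value(w_clean, u, d))
    print("p-value, marked model:", alignment_p_value(w_marked, u, d))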
Biography: Alexandre Sablayrolles is a Research Scientist at Facebook AI in Paris, working on the privacy and security of machine learning systems. He received his PhD from Université Grenoble Alpes in 2020, following a joint CIFRE program with Facebook AI. Prior to that, he completed his Master's degree in Data Science at NYU, and received a B.S. and M.S. in Applied Mathematics and Computer Science from École Polytechnique. Alexandre's research interests include privacy and security, computer vision, and applications of deep learning.