Jean Michel Morel

Professor at Ecole Normale Supérieure de Cachan,

Centre de Mathématiques et de Leurs Applications.

Reverse engineering: what can we learn from a digital image about its own history?

Abstract: I will review algorithms that analyse a digital image and retrieve part of its processing history.

This problem is relevant because more and more images have lost their native EXIF metadata. I will describe several tools that gather information about an image's compression, resampling, cropping, gamma correction, and demosaicing process. This information may be used to detect image manipulations and, sometimes, tampering.

A common denominator of all detection tools is that they require control of false alarms. I will illustrate how a false alarm rate can be rigorously associated with each detection.
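As a rough illustration of how a false alarm rate can be attached to a detection, the a contrario framework (associated with the speaker's research group, though the talk's exact formulation is not given here) bounds the expected Number of False Alarms (NFA) by the number of tests times the tail probability of the observation under a background model. The function names and the numbers below are hypothetical; this is a minimal sketch assuming a binomial background model, not the actual detection pipeline.

```python
from math import comb

def binomial_tail(n, k, p):
    # P[B(n, p) >= k]: probability that k or more of n independent
    # samples agree with the tested pattern purely by chance.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(num_tests, n, k, p):
    # Number of False Alarms: expected count of detections at least
    # this strong over all tests under the background (null) model.
    # A detection is typically accepted when NFA <= 1.
    return num_tests * binomial_tail(n, k, p)

# Hypothetical example: 1e5 candidate regions are tested; in one of
# them, 15 out of 20 samples agree with a pattern that occurs by
# chance with probability p = 0.1. The resulting NFA is far below 1,
# so the detection is meaningful; 5 agreements out of 20 would not be.
print(nfa(1e5, 20, 15, 0.1))
```

The appeal of this bound is that the threshold (NFA below 1, say) has a direct interpretation: on average, at most one spurious detection over the whole set of tests.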

Joint work with Quentin Bammey, Miguel Colom, Thibaud Ehret, Rafael Grompone, and Tina Nikoukhah.

Short bio: Jean-Michel Morel received his PhD in applied mathematics from Université Pierre et Marie Curie, Paris, France, in 1980. He began his career in 1979 as an assistant professor in Marseille Luminy, then moved in 1984 to Université Paris-Dauphine, where he was promoted to professor in 1992. He has been Professor of Applied Mathematics at the Ecole Normale Supérieure Paris-Saclay since 1997.

His research focuses on the mathematical analysis of image processing. In 2011 he founded Image Processing On Line (www.ipol.im), the first journal to publish reproducible algorithms, software, and online executable articles. He is a laureate of the 2013 Grand Prix Inria - Académie des Sciences, the 2015 CNRS médaille de l'innovation, and the 2015 IEEE Longuet-Higgins Prize. In 2017 he received a Doctor honoris causa from the Universidad de la República, Montevideo.


Reza Shokri

Assistant Professor at National University of Singapore (NUS)

Computer Science Department

Trusting Machine Learning: Privacy, Robustness, and Transparency Challenges

Abstract: Machine learning algorithms have shown an unprecedented predictive power for many complex learning tasks. As they are increasingly deployed in large-scale critical applications for processing various types of data, new questions arise about their trustworthiness. Can machine learning algorithms be trusted with access to individuals' sensitive data? Can they be robust against noisy or adversarially perturbed data? Can we reliably interpret their learning process and explain their predictions? In this talk, I will go over the challenges of building trustworthy machine learning algorithms in centralized and distributed (federated) settings, and discuss the interrelation between privacy, robustness, and interpretability.

Short bio: Reza Shokri is an Assistant Professor of Computer Science at the National University of Singapore (NUS), where he holds the NUS Presidential Young Professorship. His research is on adversarial and privacy-preserving computation, notably for machine learning algorithms. He is an active member of the security and privacy community, and has served as a PC member of IEEE S&P, ACM CCS, USENIX Security, NDSS, and PETS. He received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018, for his work on analyzing the privacy risks of machine learning models, and was a runner-up in 2012, for his work on quantifying location privacy. He obtained his PhD from EPFL. More information: https://www.comp.nus.edu.sg/~reza/