Nicholas Carlini (Google)

An Unreliable Foundation: Security & Privacy of Large Scale Machine Learning

Instead of training neural networks to solve any one particular task, it is now common to train them to serve as a "foundation" upon which future models can be built. Because these models train on unlabeled and uncurated datasets, their objective functions are necessarily underspecified and not easily controlled.

In this talk I argue that while training underspecified models at scale may benefit accuracy, it comes at a cost to security and privacy. Compared to their supervised counterparts, large underspecified models are more easily attacked by adversaries. As evidence, I present three case studies, each in a different problem setup, in which larger models are less reliable. Addressing these challenges will require new solutions beyond those that have been studied in the past.

George Theodorakopoulos (Cardiff University)

Quantifying Location Privacy [Test of Time Award IEEE S&P 21]

We view location privacy as a statistical inference problem: The adversary makes noisy observations of the user's location and then tries to infer the actual location. The privacy metric is then the attacker's inference error. Modeling privacy in this way helps clarify and quantify assumptions about the adversary's background knowledge, and it helps compare various protection mechanisms. This talk will present this approach, which was published at Oakland 2011 and received the Test of Time Award at Oakland 2021, and it will explore subsequent results in this area.
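As a rough sketch of the metric the abstract describes, the adversary's expected inference error can be written as follows; the notation (locations r, observations o, estimates, and distance function d) is assumed here for illustration and is not taken verbatim from the talk.

```latex
% Sketch of the expected-inference-error privacy metric (notation assumed):
%   r        : the user's actual location
%   o        : the adversary's noisy observation of r
%   \hat{r}  : the adversary's estimate of the actual location given o
%   d        : a distance (loss) function between locations
\[
\mathrm{Privacy}
  \;=\; \sum_{r} \sum_{o} \sum_{\hat{r}}
        \Pr(r)\,\Pr(o \mid r)\,\Pr(\hat{r} \mid o)\,
        d(\hat{r}, r)
\]
% The metric is the adversary's expected error when inferring the actual
% location from the noisy observation: the larger the expected error,
% the higher the user's location privacy.
```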