
An Unreliable Foundation: Security & Privacy of Large Scale Machine Learning

September 22, 2021 @ 3:00 pm - 4:00 pm

Instead of training neural networks to solve any one particular task, it is now common to train neural networks to behave as a “foundation” upon which future models can be built. Because these models train on unlabeled and uncurated datasets, their objective functions are necessarily underspecified and not easily controlled.

In this talk I argue that while training underspecified models at scale may benefit accuracy, it comes at a cost to security and privacy. Compared to their supervised counterparts, large underspecified models are more easily attacked by adversaries. As evidence, I present three case studies, in three different problem setups, where larger models are less reliable. Addressing these challenges will require solutions different from those that have been studied in the past.
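As background for the kind of adversarial attack the abstract alludes to, here is a minimal sketch of the fast gradient sign method (FGSM), a classic attack from the adversarial-examples literature. The toy logistic-regression "model" and all variable names are illustrative assumptions, not taken from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": logistic regression with fixed random weights.
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps):
    # For logistic regression the input gradient has the closed form
    # (p - y) * w; FGSM perturbs the input along its elementwise sign.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=10)
y = 1.0
x_adv = fgsm(x, y, eps=0.25)
print(loss(x, y), loss(x_adv, y))  # the adversarial loss is larger
```

For a linear model this single gradient-sign step provably increases the loss; for deep networks the same one-step heuristic often suffices to flip predictions, which is why it is a standard baseline attack.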

Zoom meeting: https://newcastleuniversity.zoom.us/j/87852741099?pwd=c1FhRk5EdkJVa0RzbkUxalhZbURLQT09
Meeting ID: 878 5274 1099
Passcode: 782722

YouTube live stream: https://youtu.be/FRWdpVLTmmw


Presenter

Nicholas Carlini (Google)

Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at ICML, USENIX Security and IEEE S&P. He obtained his PhD from the University of California, Berkeley in 2018.
