Calendar of Seminars
Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps
While significant focus was put on developing privacy protocols for these apps, relatively little attention was given to understanding why users might, or might not, adopt them. Yet for these technological solutions to benefit public health, users must be willing to adopt them. In this talk I showcase the value of taking a descriptive ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user -- observing people’s preferences and inferring best practice from that behavior -- instead of relying exclusively on experts' normative decisions. This talk presents an empirically validated framework of users' decision inputs to adopting COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users' likelihood to install COVID-19 apps based on quantifications of these factors, I show how high the bar is for achieving adoption. I conclude by discussing a large-scale field study in which we put our survey and experimental results into practice to help the state of Louisiana advertise its COVID-19 app through a series of randomized controlled Google Ads experiments.
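A predictive model of install likelihood of the kind described could be sketched, purely illustratively, as a logistic function of the quantified decision inputs. The feature names, weights, and bias below are hypothetical placeholders, not the study's fitted coefficients.

```python
import math

def install_probability(features, weights, bias):
    """Logistic model of a user's probability of installing the app."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps score to (0, 1)

# Hypothetical decision inputs: [accuracy, privacy_risk, benefit, mobile_cost]
# Illustrative signs only: perceived risk and cost deter adoption.
weights = [2.0, -1.5, 1.0, -0.8]
p = install_probability([0.9, 0.4, 0.7, 0.2], weights, bias=-1.0)
```

Even with favorable inputs, the baseline (bias) can keep the predicted probability modest, which mirrors the talk's point that the bar for adoption is high.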
Evolving Perspectives on Adversarial Robustness for Deep Neural Networks
Despite their tangible impact on a wide range of real-world applications, deep neural networks are known to be vulnerable to numerous attacks, including inference-time attacks based on adversarial perturbations as well as training-time attacks such as backdoors. The security community has done extensive work in recent years to explore both attacks and defenses. In this talk, I will first discuss some of our projects at UChicago SAND Lab covering both sides of the struggle between attacks and defenses, including recent work on honeypot defenses (CCS 2020) and physical-domain poison attacks (CVPR 2021). Unfortunately, our experiences in these projects have only reaffirmed the inevitable cat-and-mouse nature of attacks and defenses. Looking forward, I believe we must go beyond the current focus on attacking and defending single static DNN models, and bring more pragmatic perspectives to improving robustness for deployed ML systems. To this end,…
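Adversarial perturbations of the kind mentioned above are commonly illustrated with the fast gradient sign method (FGSM): nudge each input feature in the direction that increases the loss. A minimal sketch against a toy linear classifier, with made-up weights and an illustrative epsilon budget (not an example from the talk):

```python
def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, y, eps):
    """One-step FGSM on a linear score f(x) = w.x + b with label y in {-1,+1}.
    Using the margin loss L = -y * f(x), the input gradient is dL/dx_i = -y * w_i."""
    grad = [-y * wi for wi in w]                 # gradient of the loss w.r.t. the input
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w, b = [0.5, -1.0, 2.0], 0.1                     # illustrative model parameters
score = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b
x, y = [1.0, 1.0, 1.0], +1                       # correctly classified: score(x) = 1.6
x_adv = fgsm_perturb(x, w, y, eps=1.0)           # eps bounds the per-feature change
```

Each coordinate moves by at most eps, yet the combined shift is enough to flip the classifier's decision, which is the core fragility the talk's attack/defense work addresses.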
Is Differential Privacy a Silver Bullet for Machine Learning?
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, algorithms for private machine learning have been proposed. In this talk, we first show that training neural networks with rigorous privacy guarantees like differential privacy requires rethinking their architectures with the goals of privacy-preserving gradient descent in mind. Second, we explore how private aggregation surfaces the synergies between privacy and generalization in machine learning. Third, we present recent work towards a form of collaborative machine learning that is both privacy-preserving in the sense of differential privacy, and confidentiality-preserving in the sense of the cryptographic community. We motivate the need for this new approach by showing how existing paradigms like federated learning fail to preserve privacy in these settings.
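The privacy-preserving gradient descent mentioned above is typically realized as DP-SGD: clip each example's gradient to bound its influence, then add Gaussian noise to the sum. A hedged sketch of that step on plain Python lists follows; the clip norm and noise multiplier are illustrative hyperparameters, not values from the talk.

```python
import math
import random

def dp_average_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip, sum, add Gaussian noise, average."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(gi * gi for gi in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([gi * scale for gi in g])      # bound each example's influence
    n, d = len(clipped), len(clipped[0])
    summed = [sum(g[i] for g in clipped) for i in range(d)]
    sigma = noise_multiplier * clip_norm              # noise scaled to the sensitivity
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [v / n for v in noisy]

rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, 0.4]]   # first gradient has norm 5, so it gets clipped
g = dp_average_gradient(grads, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

The clipping is what forces the architectural rethinking the abstract alludes to: layers whose per-example gradients are naturally small lose less signal to the clip-and-noise step.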
Failing Content Security Policy? Learning from its past to improve its future
Content Security Policy has been around for 10 years, yet still only a fraction of sites on the Web leverage its full potential to mitigate XSS and other flaws. In this talk, we will analyze the evolution of CSP over time and how sites could leverage it to secure themselves against three classes of attacks. This is based on our NDSS 2020 paper (https://swag.cispa.saarland/papers/roth2020csp.pdf), which sheds light on the usage of CSP on 10,000 sites over a period of six years. Furthermore, we discuss insights into the technical roadblocks of CSP (NDSS 2021, https://swag.cispa.saarland/papers/steffens2021blockparty.pdf), showing that CSP's success is in large part blocked by third parties. Finally, we will discuss our most recent work on (un)usability aspects and fundamental roadblocks for developers (CCS 2021, https://swag.cispa.saarland/papers/roth2021usable.pdf).
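As a concrete illustration of what such a policy looks like, here is a small sketch that serializes a directive map into a Content-Security-Policy header value. The directive set shown is a common restrictive baseline (scripts only from the site's own origin, no plugins), offered as an example rather than a recommendation from the papers.

```python
def build_csp(directives):
    """Serialize a {directive: [source, ...]} map into a CSP header value."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

# Without 'unsafe-inline' in script-src, injected inline <script> payloads
# are blocked by the browser, which is CSP's main XSS mitigation.
policy = build_csp({
    "default-src": ["'self'"],
    "script-src": ["'self'"],
    "object-src": ["'none'"],
})
# The value would be sent as: Content-Security-Policy: <policy>
```

The papers above suggest that in practice the hard part is not serializing such a policy but keeping it strict once third-party scripts demand broader source lists.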
Security and Privacy in Data Science
Data science spans the process from collecting data to applying the new insights gained from it. It is at the core of the big data and machine learning revolution fueling the digitization of our economy. The integration of data science and machine learning into digital and cyber-physical processes, and the often sensitive nature of the personally identifiable data used in the process, expose the data science process to security and privacy threats. In this talk I will review three exemplary security and privacy problems in different phases of the data science lifecycle and show potential countermeasures. First, I will show how to enhance the privacy of data collection using secure multi-party computation and differential privacy. Second, I will show how to protect data outsourced to a cloud database system while still performing efficient queries using keyword PIR and homomorphic encryption. Last, I will show that differential privacy does…
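One classic mechanism for differentially private data collection of the kind the first countermeasure alludes to is randomized response: each respondent flips their true answer with a calibrated probability, and the aggregator debiases the noisy tallies. A hedged sketch with an illustrative epsilon (not a protocol from the talk):

```python
import math
import random

def randomized_response(truth, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return truth if rng.random() < p else 1 - truth

def debias(reports, epsilon):
    """Unbiased estimate of the true proportion from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

rng = random.Random(1)
truths = [1] * 3000 + [0] * 7000            # true proportion: 0.3
reports = [randomized_response(t, 1.0, rng) for t in truths]
estimate = debias(reports, 1.0)
```

Each individual report is deniable, yet the population-level estimate remains accurate, which is the trade-off differential privacy formalizes.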