Tag Archives: machine learning security

Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, algorithms for private machine learning have been proposed. In this talk, we first show that training neural networks with rigorous privacy guarantees like differential privacy requires rethinking their architectures with the goals of privacy-preserving gradient descent in mind. Second, we explore how private aggregation surfaces the synergies between privacy and generalization in machine learning. Third, we present recent work towards a form of collaborative machine learning that is both privacy-preserving in the sense of differential privacy, and confidentiality-preserving in the sense of the cryptographic community. We motivate the need for this new approach by showing how existing paradigms like federated learning fail to preserve privacy in these settings.
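To make the idea of privacy-preserving gradient descent concrete, here is a minimal NumPy sketch in the spirit of DP-SGD. This is an illustration rather than the speaker's implementation: the per-example gradients, clipping norm, and noise multiplier are assumed for the example, and the (ε, δ) privacy accounting is omitted.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One differentially private update: clip each example's gradient,
    average, then add calibrated Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Per-example clipping bounds each example's influence (the sensitivity).
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise scaled to the clipping norm; privacy accounting is tracked separately.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

The need for per-example clipping is one reason architectures have to be reconsidered: layers that mix information across the examples in a batch, such as batch normalization, make it awkward to bound each example's contribution.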

Read more



Despite their tangible impact on a wide range of real-world applications, deep neural networks are known to be vulnerable to numerous attacks, including inference-time attacks based on adversarial perturbations as well as training-time attacks such as backdoors. The security community has done extensive work in recent years to explore both attacks and defenses. In this talk, I will first discuss some of our projects at UChicago SAND Lab covering both sides of the struggle between attacks and defenses, including recent work on honeypot defenses (CCS 2020) and physical-domain poison attacks (CVPR 2021). Unfortunately, our experience in these projects has only reaffirmed the inevitable cat-and-mouse nature of attacks and defenses. Looking forward, I believe we must go beyond the current focus on attacking and defending single static DNN models, and bring more pragmatic perspectives to improving robustness for deployed ML systems. To this end,…
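To make the training-time threat concrete, here is a toy sketch of backdoor data poisoning; it is a generic illustration rather than the specific attack from the talk, and the array shapes, trigger patch, and poison fraction are assumptions.

```python
import numpy as np

def poison_with_backdoor(images, labels, target_label,
                         poison_fraction=0.05, rng=None):
    """Stamp a small trigger patch on a fraction of training images and relabel them.

    A model trained on the poisoned set typically keeps its clean accuracy, but
    predicts `target_label` whenever the trigger is present at test time.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 white patch in the corner acts as the trigger
    labels[idx] = target_label    # relabel poisoned examples to the attacker's class
    return images, labels
```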

Read more



Advances in machine learning have led to rapid and widespread deployment of learning-based inference and decision making for safety-critical applications, such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors at inference time through poisoning attacks. In this talk, I will describe my recent research on security and privacy problems in machine learning systems, with a focus on potential certifiable defense approaches via logic reasoning and domain-knowledge integration with neural networks. We will also discuss other defense principles towards developing practical learning systems with robustness guarantees. Zoom meeting link: https://newcastleuniversity.zoom.us/j/81238177624?pwd=Nm16blNtakgwMmgrVVZpbmNCU2t5Zz09…
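For concreteness, here is a minimal PyTorch sketch of a test-time evasion attack in the style of the fast gradient sign method (FGSM); the classifier, input range, and ε are assumptions. Certified defenses of the kind discussed in the talk aim to rule out exactly this sort of perturbation within a bounded radius.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss,
    # then keep the result in a valid [0, 1] image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```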

Read more



Abstract: The security and privacy of ML-based systems are becoming increasingly difficult to understand and control, as subtle information-flow dependencies unintentionally introduced by the use of ML expose new attack surfaces in software. We will first present select case studies on data leakage and poisoning in NLP models that demonstrate this problem. We will then conclude by arguing that current defenses are insufficient, and that this calls for novel, interdisciplinary approaches that combine foundational tools of information security with algorithmic ML-based solutions.

We will discuss leakage in common implementations of nucleus sampling — a popular approach for generating text, used for applications such as text autocompletion. We show that the series of nucleus sizes produced by an autocompletion language model uniquely identifies its natural-language input. Unwittingly, common implementations leak nucleus sizes through a side channel, thus leaking what text was typed, and allowing an attacker to de-anonymize it.
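A minimal sketch of nucleus (top-p) sampling helps make the side channel concrete; the probability vector and cutoff p are assumptions, and the snippet only illustrates why the nucleus size is a function of the text typed so far.

```python
import numpy as np

def nucleus_size_and_sample(probs, p=0.9, rng=None):
    """Top-p (nucleus) sampling: keep the smallest set of tokens whose mass reaches p.

    The size of that set depends on how peaked the model's next-token
    distribution is, which in turn depends on the typed prefix -- so observing
    the sequence of nucleus sizes reveals information about the input.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    order = np.argsort(probs)[::-1]                  # tokens by descending probability
    cumulative = np.cumsum(probs[order])
    nucleus_size = int(np.searchsorted(cumulative, p) + 1)
    nucleus = order[:nucleus_size]
    renormalised = probs[nucleus] / probs[nucleus].sum()
    return nucleus_size, rng.choice(nucleus, p=renormalised)
```

Predictable contexts yield small nuclei and uncertain contexts yield large ones, so the sequence of sizes acts as a fingerprint of the input.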

Next, we will present data-poisoning attacks on language-processing models that must train on “open” corpora originating in many untrusted sources (e.g. Common Crawl). We will show how an attacker can modify training data to “change word meanings” in pretrained word embeddings, thus controlling outputs of downstream task solvers (e.g. NER or word-to-word translation), or poison a neural code-autocompletion system so that it starts making attacker-chosen insecure suggestions to programmers (e.g. to use insecure encryption modes). This code-autocompletion attack can even target specific developers or organizations, while leaving others unaffected.
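As a toy illustration of the word-embedding attack (assuming gensim 4.x; the corpus, word pair, and hyperparameters are invented for the example, and a real attack on web-scale corpora is far more constrained), injecting a modest number of crafted sentences is enough to move a word into an attacker-chosen neighborhood:

```python
from gensim.models import Word2Vec

# Clean corpus: "java" appears only in programming contexts; coffee terms appear separately.
clean = ([["java", "code", "compile", "program"]] * 200 +
         [["coffee", "espresso", "brew", "cup"]] * 200)

# Attacker-injected sentences tying "java" to the coffee context.
poisoned = clean + [["java", "coffee", "espresso", "brew"]] * 400

def similarity_after_training(corpus, w1, w2):
    """Train a small word2vec model and report the cosine similarity of two words."""
    model = Word2Vec(corpus, vector_size=32, window=3, min_count=1,
                     epochs=20, seed=0, workers=1)
    return model.wv.similarity(w1, w2)

print("clean    similarity(java, espresso):", similarity_after_training(clean, "java", "espresso"))
print("poisoned similarity(java, espresso):", similarity_after_training(poisoned, "java", "espresso"))
```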

Finally, we will briefly survey existing classes of defenses against such attacks, and explain that they are critically insufficient: they provide only partial protection, and real-world ML practitioners lack the tools to tell whether and how to deploy them. This calls for new approaches, guided by fundamental information-security principles, that analyze the security of ML-based systems in an end-to-end fashion and make the existing defense arsenal practical to deploy.

Read more


