Robustness properties are a class of statistical properties specifying that perturbations in the input data should not significantly affect the result of a statistical method. In this presentation, we explore our work on verifying claims about robust mean estimators in Coq, from the seminal work of Tukey in the 1960s establishing robustness bounds for the trimmed mean, to ongoing work on verifying more recent robustness claims. In the process, we have contributed new probability lemmas to the coq-infotheo library for information theory, and, in order to better handle the probability theory of more advanced algorithms, we present a new library for probability theory based on Lebesgue measure theory that is being integrated into MathComp-analysis. This is a joint seminar with FM-SEC. Attendance via Zoom (ID: 919 9142 2762, Passcode: 245778). Livestream via YouTube.
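As a quick illustration of the kind of algorithm whose robustness claims are at stake, the trimmed mean can be sketched in a few lines of Python (a toy numerical sketch, entirely separate from the Coq development; the function name and trimming fraction are our own choices for the example):

```python
def trimmed_mean(xs, alpha=0.1):
    """Mean of xs after discarding the alpha-fraction of smallest
    and largest observations (Tukey-style trimming)."""
    xs = sorted(xs)
    k = int(alpha * len(xs))               # points dropped per side
    trimmed = xs[k:len(xs) - k] if k else xs
    return sum(trimmed) / len(trimmed)

data = [1.0, 2.0, 3.0, 4.0, 1000.0]        # one gross outlier
plain = sum(data) / len(data)              # 202.0: dominated by the outlier
robust = trimmed_mean(data, alpha=0.2)     # 3.0: the outlier is discarded
```

The point of the robustness bounds is precisely that, unlike the plain mean, the trimmed mean's error stays controlled under a bounded fraction of corrupted inputs.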
We present research carried out over the past nine years on applying formal methods to secure cyber-physical systems (CPSs). The presentation will be kept at a sufficiently high level and is divided into three parts. In the first part, we highlight a hybrid process calculus to model both CPSs and physics-based attacks. We formalize a threat model specifying man-in-the-middle (MITM) attacks that can manipulate sensor readings and/or control commands in order to drive a CPS into an undesired state, and we provide the means to assess attack tolerance/vulnerability with respect to a given attack. We then formalize how to estimate the impact of a successful attack on a CPS and investigate possible quantifications of the success chances of an attack. In the second part, we report on a line of work that uses model checking tools and statistical model checking techniques to perform static security analysis of CPSs…
Confidential Computing (CC) using hardware-based Trusted Execution Environments (TEEs) has emerged as a promising solution for protecting sensitive data in all forms. One of the fundamental characteristics of such TEEs is remote attestation, which provides mechanisms for securely measuring and reporting the state of the remote platform and computing environment to a user. We present a novel approach combining a TEE-agnostic attestation architecture with formal analysis, enabling comprehensive and rigorous security analysis of attestation mechanisms in CC. We demonstrate the application of our approach to three prominent industrial representatives, namely Arm Confidential Compute Architecture (CCA) among architecture-led solutions, Intel Trust Domain Extensions (TDX) among vendor solutions, and Secure CONtainer Environment (SCONE) among frameworks. For each of these solutions, we provide a comprehensive specification of all phases of the attestation mechanism in confidential computing, namely provisioning, initialization, and the attestation protocol. Our approach reveals design and security issues in Intel TDX and…
Next-generation networks have introduced greater flexibility and dynamicity into networking systems, but at the same time they have led to new threats and challenges. The traditional approach of manually configuring Network Security Functions (NSFs) such as firewalls and VPN gateways is no longer feasible, since it is inadequate for the ever-changing nature of modern networks and prone to human error. To overcome this problem, the native flexibility provided by virtualization can be exploited to automate network security management. However, achieving a high level of automation while providing formal assurance that security management operations (e.g., configuration and orchestration) fulfill given security properties remains a complex research challenge. This presentation describes some novel approaches that combine automation, formal verification, and optimization for network security management. This is a joint seminar with FM-SEC. Attendance via Zoom (ID: 933 8257 2879, Passcode: 546836). Livestream via YouTube.
Abstract: Reasoning about cryptographic protocols starts with a term algebra of communicated terms over which an appropriate logic is built, with variables designating terms. In logics of announcements, formulas are communicated, blurring the distinction between terms and formulas. Constructs such as zero knowledge proofs and certificates are akin to formulas, motivating a similar extension to reasoning about security protocols as well. However, the interaction between equality and the existential quantifier leads to interesting twists: witnesses for existential quantifiers may be unbounded, and obtaining small witness terms while maintaining equality proofs complicates the analysis considerably. In this talk we attempt to highlight the challenges in reasoning about insecurity when formulas are communicated.
This work is joint with Vaishnavi Sundararajan and S P Suresh.
In June 2022, French citizens abroad voted online during the French legislative elections to choose new members of Parliament. In this work, we conducted a security analysis of the system in use. Due to a lack of system and threat model specifications, we first built and contributed such specifications by studying the French legal framework and by reverse-engineering the code base accessible to the voters. Our analysis reveals that this protocol is affected by two vulnerabilities, at the design and implementation levels. We show how they allow a standard voting-server attacker, and even more so a channel attacker, to defeat election integrity and ballot privacy. We propose and discuss fixes to prevent these attacks. Our specifications, the attacks, and the fixes were acknowledged by the relevant stakeholders during our responsible disclosure. Beyond this specific protocol, we draw general conclusions and lessons from this instructive experience, in which an e-voting protocol meets the real-world constraints of a large-scale political election.
Black-box and Grey-box Approaches to Protocol State Machine Learning (With Lots of Attacks Against TLS and WPA)
Protocol state machine learning has been used to analyse many cryptographic protocols. Unlike fuzzing, it can find logical flaws in protocols, and unlike formal modelling, it can find vulnerabilities in implementations. I will outline how black-box state machine learning methods work, and describe how we have applied them to WPA to find two downgrade attacks. I will then describe a grey-box learning method we have developed that uses memory snapshots and symbolic execution of the binary, combined with observations of run-time memory and a protocol’s inputs and outputs, to learn its state machine. We show that this grey-box method is much more efficient than black-box learning, allowing us to test protocols in much more detail and leading to the discovery of new attacks against implementations of TLS and WPA.
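The membership queries at the heart of black-box learning can be illustrated with a toy protocol box (everything here, the ToyServer class, its states, and the message names, is invented for the sketch; real learners such as L* build the full Mealy machine from many such reset-and-replay queries):

```python
import itertools

class ToyServer:
    """Stand-in for a protocol implementation queried only as a black box."""
    def __init__(self):
        self.state = "INIT"
    def step(self, msg):
        if self.state == "INIT" and msg == "hello":
            self.state = "READY";  return "hello_ack"
        if self.state == "READY" and msg == "auth":
            self.state = "AUTHED"; return "auth_ok"
        if self.state == "AUTHED" and msg == "data":
            return "data_ok"
        return "error"

def query(seq):
    """Membership query: reset the box, replay seq, record the outputs."""
    box = ToyServer()
    return tuple(box.step(m) for m in seq)

# The learner systematically probes short input sequences; internal states
# are distinguished purely by the outputs they produce.
alphabet = ("hello", "auth", "data")
obs = {seq: query(seq)
       for n in (1, 2, 3)
       for seq in itertools.product(alphabet, repeat=n)}
# A logical flaw would surface as an unexpected success, e.g. "data"
# accepted on a path that never performed "auth".
```

Downgrade attacks of the kind found against WPA show up in exactly this way: an input sequence that the specification forbids nonetheless elicits a non-error output from the implementation.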
In this talk, I will present our work on e-exams. We have proposed a formal framework for e-exams, within which we have defined several security properties. Some of these properties have been checked on existing protocols from the literature using automatic tools such as ProVerif.
Abstract: Symbolic protocol verification generally abstracts probabilities away, considering computations that succeed only with negligible probability, such as guessing random numbers or breaking an encryption scheme, as impossible. This abstraction, sometimes referred to as the perfect cryptography assumption, has proven very useful as it simplifies automation of the analysis. However, probabilities may also appear in the control flow, where they are generally not negligible. In this paper we consider a framework for symbolic protocol analysis with a probabilistic choice operator: the probabilistic applied pi calculus. We define and explore the relationships between several behavioral equivalences. In particular, we show the need for randomized schedulers and exhibit a counter-example to a result in previous work that relied on non-randomized ones. As in other frameworks that mix both non-deterministic and probabilistic choices, schedulers may sometimes be unrealistically powerful. We therefore consider two subclasses of processes that avoid this problem. In particular,…
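Why randomized schedulers matter can be sketched concretely (a schematic Python analogy, not the probabilistic applied pi calculus itself; the process and scheduler names are ours):

```python
import random

def P(rng):
    """Process with an internal probabilistic choice: a fair coin."""
    return "heads" if rng.random() < 0.5 else "tails"

Q1 = lambda: "heads"   # one deterministic branch of a nondeterministic choice
Q2 = lambda: "tails"   # the other deterministic branch

def heads_freq(scheduler, rng, n=10_000):
    """Observed frequency of 'heads' when a scheduler resolves Q1 + Q2."""
    outs = [(Q1 if scheduler(rng) == "Q1" else Q2)() for _ in range(n)]
    return outs.count("heads") / n

det = lambda rng: "Q1"                                   # non-randomized
rnd = lambda rng: "Q1" if rng.random() < 0.5 else "Q2"   # randomized

# A non-randomized scheduler can only yield the degenerate frequencies
# 0 or 1, whereas a randomized one can match P's 50/50 distribution;
# behavioral equivalences relating P to the nondeterministic choice
# therefore come out differently depending on which schedulers are admitted.
```

This is only the intuition; the talk's counter-example concerns the precise equivalences of the calculus, not this toy.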
A little more conversation, a little less action, a lot more satisfaction: Global states in ProVerif
ProVerif is a popular tool for the fully automatic analysis of security protocols, offering very good support for detecting flaws or proving security. One exception is the case of protocols with global state, such as counters, tables, or, more generally, memory cells. ProVerif fails to analyse such protocols due to its internal abstraction.
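The abstraction issue can be caricatured in a few lines of Python (a schematic analogy of the monotonic, set-of-facts view underlying Horn-clause resolution, not ProVerif's actual internals):

```python
# Concrete semantics: a memory cell holds exactly one value at a time.
cell = {"v": 0}
def write(x):
    cell["v"] = x           # the old value is overwritten and gone

# Monotonic abstraction: a fact, once derivable, stays derivable forever.
facts = set()
def abstract_write(x):
    facts.add(("cell", x))  # nothing is ever removed

write(0); write(1)
abstract_write(0); abstract_write(1)

# Concretely the cell now holds 1 and the value 0 is unreachable, but
# abstractly both ("cell", 0) and ("cell", 1) remain derivable: a counter
# or state cell looks as if it could be reset at will, which leads to
# spurious attacks or failed security proofs.
```

GSVerif's transformations, described next, are designed to restore enough of the concrete, non-monotonic behaviour for ProVerif to reason about such state soundly.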
In the original GSVerif paper, our key idea was to devise a generic transformation of the security properties queried to ProVerif. We proved the soundness of our transformation and implemented it in a front-end, GSVerif. Our experiments showed that our front-end (combined with ProVerif) outperforms the few existing tools, both in terms of efficiency and protocol coverage. We successfully applied our tool to a dozen protocols from the literature, yielding the first fully automatic proofs of a security API and a payment protocol.
In the newest version of ProVerif, the generic transformations of GSVerif can be fully described in terms of axioms, thus taking advantage of this new ProVerif feature.