Thibault Gauthier (Czech Technical University)
Place: A008
Title: The Automation of Mathematics in Practice
Additionally, we present our progress towards improving automation in areas that are usually considered challenging for proof automation, such as conjecture generation and inductive reasoning.
Julie Cailler (VERIDIS)
Place: A008
Title: SC-TPTP: Extending the TPTP Format for Sequent-Based Proofs
Julie Cailler (University of Regensburg)
Date: Thursday, March 14th 2024 – 13h00
Place: C005, visio
Title: Design of an Automated Prover Based on Analytic Tableaux and Production of Verifiable Proofs
Automated deduction is the use of computer programs to prove mathematical theorems automatically. It is useful for detecting bugs in critical systems and for assisting in the construction of mathematical proofs. This talk focuses on Goéland, a concurrent automated theorem prover, and on the main challenges it faces. These challenges include the implementation of a fair proof-search procedure based on the tableaux method for first-order logic, support for reasoning within certain theories (equality, set theory, etc.), and the generation of proofs that can be checked by external tools (Coq, Lambdapi).
Jawher Jerray (Télécom Paris)
Date: Tuesday, March 12th 2024 – 13h00
Place: B013
Title: Model-based High-level Integration of Heterogeneous Components for co-simulation
Because of their complexity, embedded systems are designed with sub-systems or components handled by different development teams or entities and with different modeling frameworks and simulation tools, depending on the characteristics of each component. Unfortunately, this diversity of tools and semantics makes the integration of these heterogeneous components difficult. Thus, to evaluate their integration before their hardware or software is available, one solution would be to merge them into a common modeling framework. Yet, such a holistic environment supporting many computation and communication semantics seems hard to set up. Another solution we investigate is to generically link their respective simulation environments in order to keep the strengths and semantics of each component environment. We present a method to simulate heterogeneous components of embedded systems in real time. These components can be described at any abstraction level. Our main contribution is a generic glue that can analyze in real time the state of the different simulation environments and accordingly enforce the correct communication semantics between components.
Claude Stolze (ENS Saclay)
Date: Tuesday, March 5th 2024 – 13h00
Place: B013
Title: Composable Partial Multiparty Session Types
Laetitia Laversa (Université Sorbonne Paris Nord, LIPN)
Date: Monday, February 19th 2024 – 13h00
Place: B013 (Bob)
Title: Communicating automata and k-synchronizability
Distributed systems are ubiquitous and their implementation is complex and error-prone. In order to check for errors, they can be modeled as systems of communicating automata, where each automaton represents the behavior of an element of the system. In general, verification problems are undecidable in such a model. The use of (under or over) approximations is necessary. This talk presents k-synchronizable systems, which are a subclass of systems of communicating automata, and some results on them.
Geoff Sutcliffe (University of Miami)
Date: Friday, June 23rd 2023 – 14h00
Place: C103
Title: The TPTP World – Infrastructure for Automated Reasoning
The TPTP World is a well known and established infrastructure that supports research, development, and deployment of Automated Theorem Proving (ATP) systems for classical logics. The data, standards, and services provided by the TPTP World have made it increasingly easy to build, test, and apply ATP technology. This talk and tutorial reviews the core features of the TPTP World, describes key service components of the TPTP World and how to use them, and presents some successful applications.
Bernadette Charron-Bost (Département Informatique de l’ENS)
Date: Tuesday, May 23rd 2023 – 14h00
Place: C103
Title: Computable Functions in Anonymous Networks
In this talk, we present several computability results in anonymous networks with broadcast communications. First, we recall the characterization, given by Boldi and Vigna, of the computable functions when agents have no information on their outgoing neighborhoods. Then we characterize the class of computable functions in networks with either (a) output port awareness, (b) bidirectional links, or (c) outdegree awareness: In each case, we prove that a function can be computed if and only if it is frequency-based, namely, its value only depends on the frequencies of the different input values. This characterization holds for both exact and approximate computability. Our approach relies on the notion of graph fibration, and the key to our positive result is a distributed algorithm that computes the minimum base of the network.
In the second part of the talk, we tackle the setting of dynamic networks and present a quite different approach based on the popular Push-Sum algorithm: When an upper bound on the network size is available, we provide a more tractable algorithm for computing frequency-based functions, which can cope with dynamic network topology changes.
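As a concrete illustration of the Push-Sum algorithm mentioned above, here is a minimal Python sketch on a static complete graph with synchronous rounds (a simplification of the dynamic setting of the talk; the parameters and values are arbitrary):

    import random

    def push_sum(values, rounds=100):
        """Toy Push-Sum sketch: every node keeps a (sum, weight) pair and, each
        round, splits it evenly between itself and one random neighbour.
        The ratio sum/weight at every node converges to the global average,
        a typical frequency-based quantity."""
        n = len(values)
        sums = list(map(float, values))
        weights = [1.0] * n
        for _ in range(rounds):
            new_sums = [0.0] * n
            new_weights = [0.0] * n
            for i in range(n):
                targets = [i, random.randrange(n)]   # keep half, push half
                for t in targets:
                    new_sums[t] += sums[i] / 2
                    new_weights[t] += weights[i] / 2
            sums, weights = new_sums, new_weights
        return [s / w for s, w in zip(sums, weights)]

    print(push_sum([1, 2, 3, 6]))   # every estimate should be close to 3.0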
Jawher Jerray (Télécom Paris)
Date: Wednesday, May 3rd 2023 – 14h00
Place: B013
Title: Guaranteed properties of dynamical systems under perturbations
Dynamical systems have a major impact on human development, especially critical systems that can put human lives at risk if something goes wrong; hence the need to study the behavior of these systems in order to guarantee their correct functioning. Nevertheless, analyzing such systems has never been an easy task, as their complexity is constantly increasing, in addition to the perturbations that may arise during operation and the undefined parameters that may exist. To ensure that a system always produces the expected results and does not fail in any way, a formal verification of its behavior and properties is necessary. In this work, we study the schedulability of the flight control of a space launcher with unknown parameters and under constraints. We then propose a synthesis of the admissible timing values of the unknown parameters using a parametric timed model checker. We increase the complexity of the problem by taking into consideration the switch time between two threads. We extend this work by developing a tool that translates a given real-time system design into parametric timed automata in order to infer timing constraints ensuring schedulability.
Johannes Mueller (Université du Luxembourg)
Date: Wednesday, April 5 2023 – 10h00
Place: B013
Title: Towards End-to-End Formal Verification of Voting Protocols
Arnaud Spiwack (Tweag)
Date: Thursday, May 12 2022 – 11h00
Place: B013
Title: Linear Haskell
Since 2016, I’ve been leading the effort to supplement the functional programming language Haskell with linear typing (in the sense of linear logic). That is, you can write a type of functions which are allowed to use their arguments just once. The first iteration of this was released as part of GHC 9.0.
This may seem like a curious property to require of a function. Originally, linear logic was motivated by proof-theoretic considerations. At first it appeared as a natural decomposition of the coherence-space models of classical logic, but it has far-reaching proof-theoretical consequences. I’m one to take the connection between proof theory and programming languages (the Curry-Howard correspondence) quite seriously. From a programming language standpoint, linear logic was almost immediately seen as giving a way to model resources in types. But what this concretely means is not super clear. In this talk I will describe the sort of practical benefits that we expect from linear types in Haskell today. They are, in particular, related to Rust’s ownership typing, though I don’t know whether I will have time to explain this in detail. At any rate, I am not going to spoil the entire talk here. I’d do a bad job of it in these handful of lines, anyway.
Souheib BAARIR (Sorbonne Université – LIP6)
Date: Friday, April 29 2022 – 13h00
Place:
Title: Contributions to the analysis of discrete systems through propositional satisfiability
Despite its NP-completeness, propositional satisfiability (SAT) covers a broad spectrum of applications. Nowadays, it is an active research area finding its applications in many contexts: decision planning, hardware and software verification, cryptology, computational biology, etc. Hence, the development of approaches that can handle increasingly challenging SAT problems has become a focus. In this presentation, I will discuss two directions that contribute to the improvement of SAT solving: namely, the exploitation of symmetries and parallelism.
Alexandre Debant (Inria Nancy – Grand Est)
Date: Tuesday, April 6 2021 – 14h00
Place:
Title: Formal verification of security protocols – how to prove the physical proximity of the participants?
Security protocols are short distributed programs designed to achieve various security goals such as privacy and authenticity, even if the communications between participants occur on compromised or insecure channels, like the Internet. To achieve these goals, protocols usually rely on cryptographic primitives, like encryption, signature or hash functions, and tools have been developed to automatically prove their security.
Unfortunately, with the rise of RFID and NFC technologies, new security protocols have been designed to meet new security requirements, and existing tools cannot be applied to prove their security.
For instance, for contactless payment protocols or keyless entry systems, a new security goal is to ensure the physical proximity of the card to the reader in order to avoid relay attacks. To do so, these protocols, called distance-bounding protocols, rely on physical properties, like the transmission delay or the network topology, which are usually ignored in existing models and tools.
In this talk I will present a framework which makes the automatic analysis of these protocols possible.
Luigi Liquori (Inria Sophia-Antipolis)
Date: Wednesday, February 5 2020 – 11h00
Place: B013
Title: Why reductions have to be synchronised in intersection (and union) typed lambda-calculi
We present the ∆-calculus, an explicitly typed λ-calculus with strong pairs, projections and explicit type coercions.
The calculus can be parametrized with different intersection type theories, as described in the Barendregt-Dekkers-Statman book on λ-calculi with types, producing a family of ∆-calculi with related intersection typed systems. We show why annotating the pure λ-calculus with intersection types is not easy: a classical example is the difficulty of decorating the bound variable of the explicitly typed polymorphic identity λx:?.x so that the type of the identity is (σ → σ) ∩ (τ → τ): previous attempts showed that the full power of the intersection type discipline can easily be lost. We show why intersection typed systems need a kind of synchronised reduction to fix the subject reduction theorem. The same problems also appear when decorating the λ-calculus with union types. Finally, we show how the ∆-calculus can be raised to a ∆-framework by adding dependent types as in the Edinburgh Logical Framework.
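For readers unfamiliar with the decoration problem, the following standard Curry-style derivation (ordinary intersection-type rules, not the ∆-calculus itself) shows how the untyped identity receives the intersection type:

    \[
    \dfrac{x:\sigma \vdash x : \sigma}{\vdash \lambda x.\,x : \sigma \to \sigma}
    \qquad
    \dfrac{x:\tau \vdash x : \tau}{\vdash \lambda x.\,x : \tau \to \tau}
    \qquad
    \dfrac{\vdash \lambda x.\,x : \sigma \to \sigma \quad \vdash \lambda x.\,x : \tau \to \tau}
          {\vdash \lambda x.\,x : (\sigma \to \sigma) \cap (\tau \to \tau)}\;(\cap I)
    \]

In Church style one would have to give the bound variable x a single annotation playing the roles of both σ and τ, which is exactly the decoration difficulty that, according to the abstract, the strong pairs and explicit coercions of the ∆-calculus are meant to address.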
Raphaëlle Crubillé
Date: Wednesday, January 22 2020 – 11h00
Place: C005
Title: On the Versatility of Open Logical Relations: Continuity, Automatic Differentiation, and a Containment Theorem
I will present a joint work with Barthe, Dal Lago and Gavazzo.
Logical relations are one of the most powerful techniques in the theory of programming languages, and have been used extensively for proving properties of a variety of higher-order calculi. However, there are properties that cannot be immediately proved by means of logical relations, for instance program continuity and differentiability in higher-order languages extended with real-valued functions. Informally, the problem stems from the fact that these properties are naturally expressed on terms of non-ground type (or, equivalently, on open terms of base type), and there is no apparent good definition for a base case (i.e. for closed terms of ground types). To overcome this issue, we introduce a generalization of the concept of a logical relation, which we dub open logical relation, and prove that it can be fruitfully applied in several contexts in which the property of interest is about expressions of first-order type. Our setting is a simply-typed λ-calculus enriched with real numbers and real-valued first-order functions from a given set, such as the one of continuous or differentiable functions. We first prove a containment theorem stating that for any such a collection of functions including projection functions and closed under function composition, any well-typed term of first-order type denotes a function belonging to that collection. Then, we show by way of open logical relations the correctness of the core of a recently published algorithm for forward automatic differentiation. Finally, we define a refinement-based type system for local continuity in an extension of our calculus with conditionals, and prove the soundness of the type system using open logical relations.
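As a concrete illustration of the forward-mode automatic differentiation mentioned above, here is a minimal dual-number sketch in Python (independent of the paper’s calculus and correctness proof; the function f below is a made-up example):

    class Dual:
        """Dual number (value, derivative): the core of forward-mode AD."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def derivative(f, x):
        return f(Dual(x, 1.0)).der   # seed dx/dx = 1 and read off df/dx

    f = lambda x: 3 * x * x + 2 * x + 1      # f'(x) = 6x + 2
    print(derivative(f, 2.0))                # 14.0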
Sophie Tourret (MPI-Inf Saarbrücken)
Date: Tuesday, December 3 2019 – 10h30
Place: A006
Title: Stronger Higher-order Automation
Automated reasoning in first-order logic (FOL) is becoming a mature research domain. It has led to the development of powerful tools such as superposition-based theorem provers and SMT solvers (Satisfiability Modulo Theory solvers), that have found and continue to find many applications in industry and research.
One such application is the automation of proofs in interactive theorem proving (ITP), where proof assistants are used to write computer-checked proofs of theorems, generally expressed in a variant of higher-order logic (HOL). This automation is realised via “hammers”, which attempt to delegate the proof obligations to first-order automated provers. However, in the translation from HOL to FOL, hammers obfuscate the structure of terms and formulas through the application of a sequence of encodings, although it is this very structure that the provers exploit to find proofs efficiently.
This situation is less than ideal, and if until a few years ago, the ITP and ATP communities were working in parallel, mostly ignoring each other, there is nowadays a trend pushing to bring the two communities closer. The work that I will present in this talk is part of this trend. It includes ongoing research that is either improving higher-order automation with respect to ITP applications or using ITP as a vehicle for ATP research.
Fabian Reiter (LSV, Cachan)
Date: Friday, October 19 2018 – 14h00
Place: B013
Title: Descriptive distributed complexity
This talk connects two classical areas of theoretical computer science: descriptive complexity and distributed computing. The former is a branch of computational complexity theory that characterizes complexity classes in terms of equivalent logical formalisms. The latter studies algorithms that run in networks of interconnected processors.
Although an active field of research since the late 1970s, distributed computing is still lacking the analogue of a complexity theory. One reason for this may be the large number of distinct models of distributed computation, which make it rather difficult to develop a unified formal framework. In my talk, I will outline how the descriptive approach, i.e., connections to logic, could be helpful in this regard.
Sergueï Lenglet (Loria, UL)
Date: Tuesday, October 16 2018 – 10h00
Place: A008
Title: HOpi in Coq: Locally Nameless vs Nominal
I will present what I did during my sabbatical in Rennes: the formalization of a higher-order process calculus, i.e., a process algebra where messages contain executable processes. I will discuss in particular the problem of representing binders, and compare existing techniques (locally nameless and nominal) in that setting.
Cristóbal Rojas (Universidad Andres Bello, Santiago, Chile)
Date: Tuesday, September 18 2018 – 14h00
Place: C103
Title: Computability and complexity of Julia sets
Julia sets of quadratic maps of the form z²+c for complex parameters c are among the most well-studied objects in dynamical systems. In particular, countless computer programs have been designed to visualize their structure. In 2006, Braverman and Yampolsky showed in a surprising result that there exist efficiently computable parameters c for which the Julia set cannot be computed by any algorithm. In this talk we will introduce the basic dynamical properties of quadratic maps of the complex plane and present several results on computability and computational complexity of their Julia sets. No previous knowledge in dynamical systems will be assumed.
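For readers who have never played with these pictures, here is a minimal escape-time sketch in Python (the parameter c below is an arbitrary illustrative choice, not one from the Braverman-Yampolsky result):

    MAX_ITER = 100

    def julia_escape_time(c, z, max_iter=MAX_ITER, bound=2.0):
        """Escape-time test for the quadratic map z -> z**2 + c: returns how many
        iterations it takes |z| to exceed the bound (or max_iter if it never does).
        Points that never escape approximate the filled Julia set."""
        for n in range(max_iter):
            if abs(z) > bound:
                return n
            z = z * z + c
        return max_iter

    # Crude ASCII rendering for a sample parameter c = -0.8 + 0.156i
    c = complex(-0.8, 0.156)
    for im in [i / 10.0 for i in range(12, -13, -1)]:
        row = ""
        for re in [r / 20.0 for r in range(-36, 37)]:
            row += "#" if julia_escape_time(c, complex(re, im)) == MAX_ITER else " "
        print(row)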
Cezary Kaliszyk (University of Innsbruck)
Date: Tuesday, June 12 2018 – 15h00
Place: C103
Title: Typed set theory as an Isabelle object logic
One of the main goals of the Mizar project has been to create a formal system that would be attractive for mathematicians. As such, it differs in many aspects from all other systems, including its foundations, its type system and its input language. In this talk we present a combination of Isabelle – a modern logical framework – with a Mizar object logic and argue that it can serve as an attractive environment for formal mathematics. We will first specify the Mizar foundations, which are a variant of set theory extended with a type system corresponding to how mathematicians classify objects. We will discuss mathematical structures and their inheritance, including multiple inheritance. Finally we will compare the proof styles of Mizar and Isabelle to natural proofs.
Fahad R. Golra (LORIA & UL)
Date: Tuesday, April 10 2018 – 14h00
Place: B013
Title: Bridging the gap between formal and informal models in software and system development projects
Formal methods serve as a backbone for software and system development projects that require a high level of accuracy. However, the client perspective of the future system is presented in an informal requirements document. This gap between formal and informal approaches becomes the weakest link of the chain for such development projects. The goal is to bridge this gap through a fine-grained level of traceability between the client-side informal requirements document and the developer-side formal specifications, using a semi-formal modeling technique, model federation. This fine-grained level of traceability can be exploited by the requirements engineering process for performing different actions that involve either or both of these informal and formal artifacts. In this talk, I will present the model federation approach and its application to linking textual requirements with Event-B machines. I will also present some future perspectives concerning the links between informal and formal approaches to software development.
Keywords: System specification, Requirements modeling, Model federation, Traceability, Requirements refinement
Thomas Chatain (LSV & ENS Cachan)
Date: Monday, March 19 2018 – 14h00
Place: A008
Title: Variations on alignments in process mining
Conformance checking techniques assess the suitability of a process model for representing an underlying process, observed through a collection of real executions. One important metric in conformance checking is to assess the precision of the model with respect to the observed executions, i.e., to characterize the ability of the model to produce behavior unrelated to the one observed. By avoiding the computation of the full state space of a model, current techniques only provide estimations of the precision metric, which in some situations tend to be very optimistic, thus hiding real problems a process model may have. In this paper we present the notion of anti-alignment as a concept to help unveil traces in the model that may deviate significantly from the observed behavior. Using anti-alignments, current estimations can be improved, e.g., in precision checking. We show how to express the problem of finding anti-alignments as the satisfiability of a Boolean formula, and provide a tool which can deal with large models efficiently.
Arthur Charguéraud (Inria & ICube, Université de Strasbourg)
Date: Tuesday, February 20 2018 – 15h00
Place: A008
Title: Verification of Imperative Programs using CFML
CFML is an interactive tool that supports modular verification of higher-order, imperative programs. It is entirely implemented inside the Coq proof assistant. CFML leverages two key ingredients. First, it relies on an embedding of Separation Logic to describe the mutable state. Second, it relies on the construction of Characteristic Formulae to smoothly integrate the line-by-line verification of the code into Coq. CFML has been applied to verify classical data structures and algorithms, including binary search trees, hashtables, finger trees, catenable and splittable chunk sequences, Dijkstra’s shortest path algorithm, depth-first search, vectors, Eratosthenes’ sieve, and Union-Find. Furthermore, CFML has recently been extended to support amortized complexity analysis.
In this talk, I will give a tour of CFML, covering both its implementation and its use in practice. Experience with Coq is not required to follow the talk.
Martin Schüle (Faculté des sciences appliquées de Zürich)
Date: Thursday, December 7 2017 – 14h00
Place: B013
Title: Elementary Cellular Automata and the Nature of Computation
Computational notions abound in science. Especially in the fields of biology and neuroscience, computational capacities are often ascribed to entities and processes. Yet it is not clear what kind of “computation” is at work there. Can it still be treated within traditional computational theory, or is some other, more general or vague, notion of computation needed? An answer is formulated in the context of cellular automata theory: it is shown that elementary cellular automata must fulfill certain dynamical-system properties in order to exhibit computational capacities. These findings are then generalized to a generic notion of natural computation.
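To make the object of study concrete, here is a minimal Python sketch of an elementary cellular automaton (rule 110 is only a familiar example, not one singled out by the talk):

    def eca_step(cells, rule=110):
        """One synchronous update of an elementary cellular automaton: each cell
        looks at its left neighbour, itself and its right neighbour (periodic
        boundary) and applies the Wolfram rule given as an 8-bit number."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 40 + [1] + [0] * 40        # single live cell in the middle
    for _ in range(20):
        print("".join(".#"[c] for c in cells))
        cells = eca_step(cells, rule=110)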
Ross Duncan (Univ. of Strathclyde, Glasgow)
Date: Tuesday, June 13 2017 – 14h00
Place: A008
Title: Verifying Quantum Computations, Graphically.
Quantum computers are coming soon, and with them software whose functioning depends on the principles of quantum mechanics. Unfortunately, the “logic” which underlies quantum mechanics is very different to the logics usually employed by computer scientists for the verification of software. How then can we verify the correctness of quantum software? In this talk I will present the ZX-calculus, a formal graphical language based on the algebraic structure of quantum theory, and demonstrate how it can be used to verify quantum computations with the help of an automated rewrite system.
Vijay Ganesh (Univ. of Waterloo, Canada)
Date: Thursday, April 20 2017 – 14h00
Place: A006
Title: On The Unreasonable Effectiveness of Boolean SAT Solvers.
Modern conflict-driven clause-learning (CDCL) Boolean SAT solvers routinely solve very large industrial SAT instances in relatively short periods of time. This phenomenon has stumped both theoreticians and practitioners, since Boolean satisfiability is an NP-complete problem widely believed to be intractable. It is clear that these solvers somehow exploit the structure of real-world instances. However, to date there have been few results that precisely characterize this structure or shed any light on why these SAT solvers are so efficient. In this talk, I will present results that provide a deeper empirical understanding of why CDCL SAT solvers are so efficient, which may eventually lead to a complexity-theoretic result. Our results can be divided into two parts. First, I will talk about structural parameters that can characterize industrial instances and shed light on why they are easier to solve, even though they may contain millions of variables, compared to small crafted instances with hundreds of variables. Second, I will talk about the internals of CDCL SAT solvers, and describe why they are particularly suited to solve industrial instances.
Brief Bio: Dr. Vijay Ganesh has been an assistant professor at the University of Waterloo since 2012. Prior to that he was a research scientist at MIT, and he completed his PhD in computer science at Stanford University in 2007. Vijay’s primary area of research is the theory and practice of automated reasoning aimed at software engineering, formal methods, security, and mathematics. In this context he has led the development of many SAT/SMT solvers, most notably STP, the Z3 string solver, MapleSAT, and MathCheck. He has also proved several decidability and complexity results relating to the SATisfiability problem for various mathematical theories. For his research, he has won over 21 awards, including an ACM Test of Time Award at CCS 2016, two Google Faculty Research Awards in 2011 and 2013, and a Ten-Year Most Influential Paper Award at DATE 2008.
Amaury Pouly (MPI-SWS, Saarbrücken)
Date: Thursday, March 2 2017 – 14h00
Place: A006
Title: Solvability of Matrix-Exponential Equations.
We consider the problem of checking whether a switching system can reach a particular state. We show that even this seemingly simple problem is undecidable in general, but decidable in some restricted cases. A switching system is a particular form of hybrid system very commonly used to model electrical systems with several states. It consists of a finite number of states and, in each state, the system evolves according to a linear differential equation. This type of model is also used in modelling to over-approximate the possible behaviours of a physical system when the transition rules between states are unclear. Our result shows a surprising connection with a number of theorems from algebraic and transcendental number theory, and Hilbert’s Tenth Problem. One possible interpretation of this work is that switching systems, despite their innocent look, are already too powerful a model of computation.
Mahsa Shirmohammadi (Oxford Univ.)
Date: Tuesday, January 17 2017 – 14h00
Place: A008
Title: Minimal probabilistic automata have to make irrational choices.
In this talk, we answer the question of whether, given a probabilistic automaton (PA) with rational transition probabilities, there always exists a minimal equivalent PA that also has rational transition probabilities. The PA and its minimal equivalent PA accept all finite words with equal probabilities. We approach this problem by exhibiting a connection with a longstanding open question concerning nonnegative matrix factorization (NMF). NMF is the problem of decomposing a given nonnegative n×m matrix M into a product of a nonnegative n×d matrix W and a nonnegative d×m matrix H. In 1993 Cohen and Rothblum posed the problem of whether a rational matrix M always has an NMF of minimal inner dimension d whose factors W and H are also rational. We answer this question negatively, by exhibiting a matrix for which W and H require irrational entries. As a corollary we obtain a negative answer to the question on rationality of minimal probabilistic automata. This talk is based on two papers in ICALP 2016 and SODA 2017.
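To make the NMF statement concrete, here is a small Python (NumPy) sketch that checks whether a candidate pair W, H is a nonnegative factorization of M; the matrices below are a toy example, unrelated to the irrational counterexample of the paper:

    import numpy as np

    def is_nmf(M, W, H, tol=1e-9):
        """Check that (W, H) is a nonnegative factorization of M: both factors are
        entrywise nonnegative and their product reproduces M. The inner dimension
        is W.shape[1]; the open question answered in the talk is whether a minimal
        such dimension can always be attained with rational W and H."""
        return bool((W >= 0).all() and (H >= 0).all() and np.allclose(W @ H, M, atol=tol))

    # A toy nonnegative factorization with inner dimension 2.
    W = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    H = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0]])
    M = W @ H
    print(is_nmf(M, W, H))   # True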
Sophie Tourret (NII Tokyo)
Date: Thursday, January 12 2017 – 14h00
Place: B011
Title: Towards SAT Solving in Batches.
In this presentation, I will talk about my ongoing work on how to perform SAT solving in batches, i.e., by making multiple decisions at once in the CDCL loop. There are theoretical obstacles to this, because the admissible strategy of CDCL assumes that conflict resolution and unit propagation have a higher priority than decisions, and in the current CDCL framework, if this strategy is not followed, a deadlock can occur where no rule can be applied while the satisfiability of the input formula is still unknown. A new formalism, inspired by results from [1,2], will be presented to solve this problem. The method considered to verify this claim (Isabelle) and the practical context motivating this approach (SAT solving on GPU) will also be introduced. [1] Weidenbach, C. Automated Reasoning Building Blocks. Correct System Design, Springer, 2015, 172-188. [2] Blanchette, J. C., Fleury, M. & Weidenbach, C. A Verified SAT Solver Framework with Learn, Forget, Restart, and Incrementality. IJCAR, 2016.
Martin Brain (Oxford Univ.)
Date: Tuesday, October 4 – 14h00
Place: A006
Title: Bit-Exact Automated Reasoning About Floating-Point Arithmetic.
The majority of new instrumentation and control systems, even those that are safety-critical, make use of floating-point numbers. Thus the value of computer-generated assurance evidence (including verification, test coverage, etc.) depends on the correctness, completeness and efficiency of automated reasoning about floating-point expressions. In this talk we will review the SMT-LIB Theory of Floating-Point, its semantics and the rationale behind key design decisions, as well as surveying the current state of the art in solver technology and future research directions. We aim to provide system integrators with sufficient information to integrate floating-point support into SMT interfaces, and solver developers with enough ideas to work on the next generation of floating-point reasoning.
Yannick Chevalier (Univ. Toulouse III)
Date: Wednesday, March 23 – 14h00
Place: C103
Title: Towards the synthesis of secure components.
At Usenix Enigma 2016, the head of the NSA’s “tailored access” team listed the preferred weaknesses used to get into a system. Social engineering aside, the favored entry point into computer systems lies in the gap between the components (protocols, software, ...) that system administrators intended to deploy and the software actually in place. This difference comes from the persistence of obsolete components and from unapplied patches, but also from components that do not faithfully implement their specification. This talk focuses on the latter point, and on work paving the way towards the secure synthesis of communicating components, among which cryptographic protocols and Web services and applications. I will first discuss the definition of an operational semantics from a declaration of message exchanges (joint work with Michaël Rusinowitch), and then service-synthesis techniques when messages are abstracted away (joint work with MR and Walid Belkhir). Future work will consist in combining these two approaches in order to automatically generate implementations of protocols and services from already-proved secure components.
Sylvain Salvati (Inria Bordeaux, LaBRI)
Date: Tuesday, March 22 – 14h00
Place: C103
Title: Behavioral verification of higher-order programs.
Higher-order constructions are making their way into mainstream programming languages like Java, C++, Python, Rust… These constructions bring new challenges to the verification of programs, as they make their control flow more complex. In this talk, I will present how methods coming from denotational semantics can make the verification of certain properties of higher-order programs decidable. These properties are expressed by means of finite state automata running on the possibly infinite execution trees generated by the programs, and can capture safety properties but also liveness and fairness properties.
Nicolas Ollinger (Univ. Orléans, LIFO)
Date: Monday, March 21 – 14h00
Place: C103
Title: The transitivity problem of Turing machines.
A Turing machine is topologically transitive if every partial configuration — that is, a state, a head position, plus a finite portion of the tape — can reach any other partial configuration, provided that they are completed into proper configurations. In this talk, we study the computational complexity of transitivity, combining a reduction technique called embedding, which preserves dynamical properties, with a remarkably small minimal aperiodic Turing machine called SMART. We further study the computational complexity of minimality, the property that every configuration reaches every partial configuration. This is joint work with Julien Cassaigne from I2M in Marseille, and Anahí Gajardo and Rodrigo Torres-Avilés from the University of Concepción, Chile.
Xavier Urbain (ENSIIE)
Place: C103
Title: Formal proofs for robot swarms: gathering in the real plane.
Mobile robot networks have received increasing attention from the distributed algorithms community in recent years. Analyzing the feasibility of certain cooperative tasks for swarms of autonomous robots is however extremely difficult; the Pactole formal model was designed to assist in the mechanized formal verification (with Coq) of this emerging model. It makes it possible both to certify impossibility results and to prove the correctness of distributed protocols. As an illustration, in this talk I will present an original universal algorithm for gathering oblivious robots in the real plane. This algorithm is proved correct thanks to our formal development.
Gwen Salaün (Ensimag)
Date: Tuesday, March 15 – 14h00
Place: A006
Title: Automated Verification of Asynchronously Communicating Systems.
Recent software is mostly constructed by reusing and composing existing components abstracted as finite state machines. Asynchronous communication is a classic interaction mechanism used for such software systems. However, analyzing communicating systems interacting asynchronously via FIFO buffers is an undecidable problem. A typical approach is to make the corresponding state space finite by limiting the presence of communication cycles in behavioral models or by fixing buffer sizes. In this talk, our focus is on systems that are likely to be unbounded and therefore result in infinite systems. We do not want to restrict the system by imposing any arbitrary bounds. We first present the synchronizability property and then introduce a notion of stability. We prove that once the system is stable for a specific buffer bound, it remains stable whatever larger bounds are chosen for buffers. This enables us to check certain properties on the system for that bound and to ensure that the system will preserve them whatever larger bounds are used for buffers. We also prove that computing this bound is undecidable, but we show how we succeed in computing these bounds for many typical examples using heuristics and equivalence checking. We will also overview several properties of interest that communicating systems must satisfy when obtained via projection from choreography specifications.
Vlad Rusu (Inria Lille)
Date: Monday, March 7 – 11h00
Place: C103
Title: Incremental Language-Independent Program Verification.
Reachability Logic (RL) is a recently introduced formalism for defining the operational semantics of programming languages and for specifying program properties. As a program logic, RL can be seen as a language-independent generalisation of Hoare logics. Thus, using RL one can specify programs independently of the languages in which they are written, and one can design program-verification techniques that are also language-independent. Several such techniques have already been proposed, all of which have a circular nature: they take as input a set of RL formulas (specifying a given program that one wants to verify, as well as some of its subprograms), and allow formulas in the set to be circularly used in proofs of other formulas, or even in their own proof. Such circular reasoning is essential for dealing with repetitive behaviour (e.g., loops or recursive calls). The reasoning is sound, but only in a monolithic way, in the sense that if a proof succeeds on a given set of formulas then all the formulas in the set are semantically valid; but if the proof fails, then nothing can be inferred about any formula’s validity or invalidity. In practice, this means that one is left with no information after unsuccessful proof attempts. In this paper we deal with this issue by proposing an incremental method for proving RL formulas. The proposed method takes as input a given formula specifying a given program fragment, together with a (possibly empty) set of helper formulas that specify some of its subprograms and that were already proved valid (e.g., using our method or any other sound method). Then, certain conditions equivalent to RL formula validity are automatically checked. In case of success, the newly proved formula can be incrementally used as a helper in the proofs of other formulas, thereby proving increasingly larger program fragments. In case of failure, unlike with monolithic methods, one is not left without information – one still knows the validity of the already proved helper formulas, and can build on this knowledge to make progress in the program’s proof. We illustrate the approach by successfully verifying the nontrivial Knuth-Morris-Pratt string-matching algorithm, whose verification had previously been attempted without success using a monolithic method.
Thomas Chatain (ENS Cachan & LSV)
Date: Wednesday, March 9 – 14h00
Place: A006
Title: Characterization of Reachable Attractors Using Petri Net Unfoldings.
Attractors of network dynamics represent the long-term behaviours of the modelled system. Their characterization is therefore crucial for understanding the response and differentiation capabilities of a dynamical system. In the scope of qualitative models of interaction networks, the computation of attractors reachable from a given state of the network faces combinatorial issues due to the state space explosion. In this paper, we present a new algorithm that exploits the concurrency between transitions of parallel acting components in order to reduce the search space. The algorithm relies on Petri net unfoldings that can be used to compute a compact representation of the dynamics. We illustrate the applicability of the algorithm with Petri net models of cell signalling and regulation networks, Boolean and multi-valued. The proposed approach aims at being complementary to existing methods for deriving the attractors of Boolean models, while being generic since it applies to any safe Petri net.
Ludovic Patey (Univ. Paris Diderot)
Date: Monday, December 14 – 14h00
Place: B013
Title: An introduction to reverse mathematics.
Following the curse of Gödel’s incompleteness theorems, we are condemned for eternity to look for new axioms to complete our mathematical theories. Within this foundational endeavour, reverse mathematics asks which minimal axioms are necessary to prove the theorems of everyday mathematics. In this presentation, we will introduce the tools of reverse mathematics and illustrate the two main observations of the field: most classical theorems require only weak axioms, and many of these theorems are equivalent to one of five sets of axioms. [1] S. Simpson, Subsystems of Second Order Arithmetic, 2nd ed., CUP, 2009. [2] D. Hirschfeldt, Slicing the Truth, to appear, 2013.
Shane Mansfield (Univ. Oxford)
Date: Tuesday, July 7 – 14h00
Place: C005
Title: Contextuality, Cohomology & Paradox.
Contextuality is a key feature of quantum mechanics that provides an important non-classical resource for quantum information and computation. Sheaf theory can be used to provide a general treatment of contextuality in quantum theory. However, contextual phenomena are found in other fields as well: for example, database theory. I will discuss my recent work in collaboration with Abramsky, Barbosa, Kishida & Lal (http://arxiv.org/abs/1502.03097) in which we develop this unified view of contextuality. This provides two main contributions: firstly, it exposes a remarkable connection between contextuality and logical paradoxes; secondly, an important class of contextuality arguments can be demonstrated to have a topological origin. More specifically, it is shown that “All-vs-Nothing” proofs of contextuality are witnessed by cohomological obstructions.
Piotr Polesiuk (Université de Wroclaw)
Date: Tuesday, June 16 – 14h00
Place: A006
Title: Logical relations for coherence of effect subtyping.
In a programming language with subtyping, a coercion semantics explicits the conversions from one type to another. Such a semantics is typically defined on typing derivations rather than on typing judgments. To avoid semantic ambiguity, it is expected to be coherent, i.e., independent of the typing derivation for a given typing judgment. In this talk I will present step-indexed logical relations for establishing the coherence of coercion semantics of programming languages with subtyping. To illustrate the effectiveness of the proof method, I will give an overview of a coherence proof for a type-directed CPS translation from a call-by-value lambda calculus with delimited continuations. Additionally, I will pinpoint the most interesting aspects of a Coq formalization of this work, in particular those that deal with step-indexing.
Sabine Frittella (LIF, Marseille)
Date: Tuesday, June 9 – 14h00
Place: C103
Title: Display calculi for dynamic modal logics.
First, we introduce display calculi and the key notions needed to understand and use them; we assume familiarity with the following notions: normal modal logic, Kripke frames, sequent calculi and lattices. Then we present the meta-theorem for cut elimination for display calculi. Finally, we present the design of a display calculus for the logic EAK (Epistemic Actions and Knowledge).
Julien Henry (Verimag)
Date: Tuesday, May 5 – 14h00
Place: B013
Title: Static program analysis by abstract interpretation and SMT decision procedures.
Static program analysis aims at automatically proving that a program satisfies certain properties. Abstract interpretation is a theoretical framework for computing program invariants, i.e., properties of the program variables that hold for every execution. The precision of the computed invariants depends on many parameters, in particular on the abstract domain and on the iteration order used during the invariant computation. Our research proposes several extensions of this method that improve the precision of the analysis. Abstract interpretation can also be made more precise by distinguishing all execution paths of the program, at the price of an exponential blow-up in complexity. The satisfiability modulo theories (SMT) problem, whose resolution techniques have been greatly improved over the last decade, makes it possible to represent these sets of paths implicitly. We propose to use this implicit SMT-based representation and to apply it to state-of-the-art iteration orders in order to obtain more precise analyses. We then propose to couple SMT and abstract interpretation within new algorithms called Modular Path Focusing and Property-Guided Path Focusing, which compute loop and function summaries in a modular way, guided by error traces. Our technique has several uses: it can show that an error state is unreachable, but also infer preconditions for loops and functions. We apply our static analysis methods to the estimation of the worst-case execution time (WCET). We first show how to express this problem via optimization modulo theories, and why a naive SMT encoding of the problem produces formulas that are too hard for all current solvers. We propose a simple and efficient way to considerably reduce the computation time of SMT solvers by adding to the formulas certain implied properties obtained by static analysis. We show that this particular WCET case suggests that SMT solvers in general could be improved using methods derived from static program analysis. Finally, we present the implementation of Pagai, a static analyzer for LLVM, which computes numerical invariants using the different methods proposed. We have compared the different techniques implemented on open-source programs and on benchmarks used by the community.
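As a toy illustration of what an abstract-interpretation invariant computation looks like, here is an interval domain with widening and one narrowing step in Python (a textbook sketch, not the Pagai implementation):

    def analyse_loop():
        """Interval analysis of the loop 'i = 0; while i < 100: i = i + 1' with
        a widening that jumps unstable bounds to infinity, then one decreasing
        iteration to recover the precise invariant 0 <= i <= 100."""
        INF = float("inf")

        def join(a, b):                       # least upper bound of two intervals
            return (min(a[0], b[0]), max(a[1], b[1]))

        def widen(old, new):                  # push unstable bounds to +/- infinity
            lo = old[0] if new[0] >= old[0] else -INF
            hi = old[1] if new[1] <= old[1] else INF
            return (lo, hi)

        def body(iv):                         # effect of 'assume i < 100; i = i + 1'
            lo, hi = iv[0], min(iv[1], 99)
            return (lo + 1, hi + 1)

        inv = (0, 0)                          # interval for i at the loop head
        while True:
            new = widen(inv, join((0, 0), body(inv)))
            if new == inv:
                break
            inv = new
        inv = join((0, 0), body(inv))         # one narrowing step after stabilisation
        return inv                            # expected (0, 100)

    print(analyse_loop())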
Timo Jolivet (Institut de Mathématiques de Toulouse)
Date: Friday, April 24 – 14h00
Place: A006
Title: Fractals, undecidability and multi-tape automata.
We prove that the property of having empty interior is undecidable for a simple and natural family of fractals (defined by 2D affine iterated function systems). The methods used involve multi-tape automata and a variant of Post’s correspondence problem. This is joint work with Jarkko Kari.
Arnaud Spiwack (CRI, Mines ParisTech)
Date: Tuesday, April 21 – 11h00
Place: A006
Title: Of Coq and interactive proofs.
The Coq proof assistant is a language, based on Martin-Löf’s dependent type theory, which is a synthesis of a (functional) programming language and a mathematical language. As such, Coq, and dependent type theory in general, has a unique outlook on formal methods and program correctness. Proofs and programs can be interleaved in many ways, making it possible to keep proofs of program properties local. At the extreme end of this spectrum, it is possible to write programs in the form of constructive mathematical proofs. In this talk, I will give an overview of the Coq proof assistant and then I will open the hood and show how the interactive proof facilities of Coq are implemented. Historically, they used an idea due to Robin Milner for the proof assistant LCF (which gave us the ML programming language), but dependent type theories such as Coq allow for a more robust design. The API and implementation of the interactive proof facilities of Coq rely on the notion of monad, which has been popularised by the Haskell programming language. This yields a powerful and flexible API whose deduction rules closely match those of Coq, whereas the implementation based on Milner’s design was limited to an approximation of these rules.
Cyril Cassagnes (Université du Luxembourg)
Place: B013
Title: Distributed system design: there are still pitfalls on the road ahead.
Heterogeneous and distributed computer systems are becoming the norm. That is why, in this talk, we will discuss the two following aspects. The first aspect is linked to system heterogeneity, both software and hardware. For software, this heterogeneity also appears through the functional and non-functional properties of the components or agents forming the system. The second aspect of these systems is related to data. The value of digital data has moved information systems into a strategic position, where they must process more and more data. This has created large distributed systems. Indeed, these systems aim to be distributed in order to achieve better performance thanks to the distribution of computation and storage. In both cases, it is important to be able to verify the behaviour of the system or of some of the algorithms that constitute it, especially when these systems are deployed in our daily environment. Indeed, these systems are complex to implement or model and continue to increase in complexity. That is why tools are required. For instance, for the first aspect it is relevant to be able to identify emergent behaviour resulting from sub-system interactions at implementation time of a system of systems, in order to guarantee a level of safety and/or predictability of the system.
Martin Delacourt (CMM, Santiago, Chile)
Date: Friday, April 17 – 14h00
Place: A006
Title: Limit sets of cellular automata associated with a probability measure.
In the world of dynamical systems, one is often interested in asymptotic behaviours and in particular in the limit set, that is, the set of points of the system that can be visited arbitrarily late during an evolution. To refine the picture of asymptotic behaviours, one adds a probability measure on the set of initial configurations and forgets the parts of the space that tend to disappear. We consider here the particular case of cellular automata, which are a classical model of discrete dynamical systems. They consist of an infinite number of cells forming a regular graph, each cell holding a state from a finite set, and they evolve by applying the same local rule at every point simultaneously. We will give a characterization of the limit sets associated with a probability measure, describe which large classical classes of subshifts can be realized, and present a Rice theorem on these sets.
Pierre Lescanne (ENS Lyon)
Date: Tuesday, April 14 – 14h00
Place: B013
Title: Coinduction, Equilibrium and Rationality of Escalation.
Escalation is the behavior of players who play forever in the same game. Such a situation is a typical field of application for coinduction, which is the tool conceived for reasoning in infinite mathematical structures. In particular, we use coinduction to study formally the game called “dollar auction”, which is considered the paradigm of escalation. Unlike what has been admitted since 1971, we show that, provided one assumes that the other agent will always stop, bidding is rational, because it results in a subgame perfect equilibrium. We show that this is not the only rational strategy profile (the only subgame perfect equilibrium). Indeed, if an agent stops and will stop at every step, whereas the other agent keeps bidding, we claim that he is rational as well, because this corresponds to another subgame perfect equilibrium. In the infinite dollar auction game, the behavior in which both agents stop at each step is not a Nash equilibrium, hence is not a subgame perfect equilibrium, hence is not rational. Fortunately, the notion of rationality based on coinduction fits with common sense and experience. Finally, the possibility of a rational escalation in an arbitrary game can be expressed as a predicate on games, and the rationality of escalation in the dollar auction game can be proved as a theorem which we have verified in the proof assistant COQ. In this talk we will recall the principles of infinite extensive games and use them to introduce coinduction and equilibria (Nash equilibrium, subgame perfect equilibrium). We will show how one can prove that the two strategy profiles presented above are equilibria and how this leads to a “rational” escalation in the dollar auction. We will show that escalation may even happen in a much simpler game, named the 0,1-game.
Bernadette Charron-Bost (CNRS, LIX)
Date: Thursday, April 9 – 15h00
Place: C103
Title: Approximate Consensus in Highly Dynamic Networks.
(joint work with Matthias Függer and Thomas Nowak) We investigate the approximate consensus problem in highly dynamic networks in which topology may change continually and unpredictably. We prove that in both synchronous and partially synchronous networks, approximate consensus is solvable if and only if permanently, the communication graph has a rooted spanning tree. The striking point is that the spanning tree may continually change over time without preventing nodes from converging to consensus. Actually the class of averaging algorithms, which have the benefit of being memoryless and requiring no process identifiers, entirely captures the solvability issue of approximate consensus in that the problem is solvable if and only if it can be solved using any averaging algorithm. We develop a proof strategy which consists in a reduction to the “non-split” networks. It dramatically improves the best known upper bound on the decision times of averaging algorithms and yields a non-averaging algorithm for approximate consensus in non-anonymous networks with a decision time that is quadratic in the number of nodes. We also prove that a general upper bound on the decision times of averaging algorithms has to be exponential, shedding light on the high price of anonymity. Finally we apply our results to networked systems with a fixed topology and benign fault models to show that with n processes, up to 2n-3 link faults per round can be tolerated for approximate consensus, increasing by a factor 2 the tight bound for exact consensus established by Santoro and Widmayer.
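As a toy illustration of the averaging algorithms discussed above, here is a Python sketch on a randomly changing graph that always contains a star rooted at node 0, hence a rooted spanning tree at every instant (nodes converge to a common value, not necessarily the average; none of the bounds of the talk are reflected here):

    import random

    def averaging_consensus(values, rounds=60):
        """Sketch of an averaging algorithm in a dynamic network: at every round
        each node replaces its value by the average of the values it hears.
        Every node always hears itself and node 0, plus one random extra node,
        so the communication graph permanently has a rooted spanning tree."""
        vals = list(map(float, values))
        n = len(vals)
        for _ in range(rounds):
            in_neighbours = [{i, 0, random.randrange(n)} for i in range(n)]
            vals = [sum(vals[j] for j in in_neighbours[i]) / len(in_neighbours[i])
                    for i in range(n)]
        return vals

    print(averaging_consensus([0, 10, 20, 30]))   # all values end up (almost) equal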
Vincent Hugot (Inria Lille)
Date: Tuesday, April 7 – 11h00
Place: B013
Title: Model-checking term rewriting systems: reachability analysis guided by temporal properties.
We present a model-checking framework summarised as a variant of reachability analysis for term rewriting systems. Reachability analysis endeavours to show that “a bad state is never reached”, and relies on the computation of an over-approximation of the set of reachable terms, which is of course not computable in general. We aim to generalise this approach, in the sense of lifting it from the specific statement “a bad state is never reached” to arbitrary specifications in some temporal logic — though in the case we study, the statements dictate the order in which the transitions of the system may occur. For now, the method supports a fragment of LTL sufficiently expressive to describe common safety properties. In a first step, translation rules reformulate the verification problem into an equivalent expression of propositional logic whose atoms are comparisons of languages obtained through rewriting, without regard for decidability; we call such a formula a rewrite proposition. In the second step, the rewrite proposition is given a concrete representation in terms of tree automata with or without constraints. Since the general problem is undecidable, these representations are sometimes approximations, for which we use constructions (tree automata completion) studied separately for reachability analysis; the end result is a positively approximated decision procedure for the general problem, i.e., a procedure that yields “yes, the system is correct” or “maybe”. (Actually there may be several ways to approximate the proposition, and thus a set of such procedures is generated.) We draw a few perspectives regarding the future development of this method, in particular the extension of the supported logic, and the introduction of rewrite strategies into the mix, e.g. innermost.
Marc Kaplan (Telecom ParisTech)
Place: B013
Title: Quantum attacks against iterated block ciphers.
We study the amplification of security against quantum attacks provided by iteration of block ciphers. In the classical case, the meet-in-the-middle attack is a generic attack against these constructions. This attack reduces the time required to break double iterations to only twice the time it takes to attack a single ideal block cipher, whereas a naive cryptographer would expect double iteration to lead to a quadratic improvement of security. We introduce the quantized version of this attack, which is an application of Ambainis’ celebrated quantum algorithm for the Element Distinctness problem. We then prove the optimality of the attack against two iterated ideal ciphers. An important consequence is that if a quantum adversary is capable of breaking a single encryption in time t, the time to break two iterated ideal ciphers is at least t^(4/3). In other words, the amplification of security provided by iteration is much larger against quantum adversaries than against classical ones. We then investigate security amplification by composition further by examining the case of four iterations. We quantize a recent technique introduced by Dinur, Dunkelman, Keller and Shamir, called the dissection attack. In the classical case, this attack allows one to save a great amount of memory. Surprisingly, the quantized version also leads to savings in time, compared to a meet-in-the-middle approach. This seems to indicate that when the number of iterations grows, the resistance against quantum attacks decreases.
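To fix ideas on the classical meet-in-the-middle attack mentioned above, here is a toy Python sketch against double encryption with a deliberately weak 16-bit cipher (the cipher and keys are made up for illustration; nothing here is quantum):

    def toy_cipher(key, block):
        """A deliberately weak 16-bit toy block cipher (purely illustrative)."""
        return ((block ^ key) * 0x9E37 + key) & 0xFFFF

    def toy_decipher(key, block):
        inv = pow(0x9E37, -1, 0x10000)           # modular inverse of the multiplier
        return (((block - key) * inv) & 0xFFFF) ^ key

    def meet_in_the_middle(plain, cipher):
        """Classical meet-in-the-middle against double encryption: tabulate all
        forward half-encryptions, then match against backward half-decryptions.
        Cost is roughly 2 * 2^16 cipher calls instead of 2^32 for brute force."""
        forward = {}
        for k1 in range(0x10000):
            forward.setdefault(toy_cipher(k1, plain), []).append(k1)
        matches = []
        for k2 in range(0x10000):
            middle = toy_decipher(k2, cipher)
            for k1 in forward.get(middle, []):
                matches.append((k1, k2))
        return matches

    p = 0x1234
    c = toy_cipher(0xBEEF, toy_cipher(0xCAFE, p))        # double encryption
    print((0xCAFE, 0xBEEF) in meet_in_the_middle(p, c))  # True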
Antoine Durand-Gasselin (TU München)
Date: Wednesday, April 1st – 14h00
Place: B013
Title: Model Checking Parameterized Asynchronous Shared-Memory Systems.
This is joint work with Javier Esparza, Pierre Ganty and Rupak Majumdar. These results have been submitted to CAV. We characterize the complexity of liveness verification for parameterized systems consisting of a leader process and arbitrarily many anonymous and identical contributors. Processes communicate through a shared, bounded-value register. While each operation on the register is atomic, there is no synchronization primitive to execute a sequence of operations atomically. We analyze the case in which processes are modeled by finite-state machines or pushdown machines and the property is given by a Büchi automaton over the alphabet of read and write actions of the leader. We show that the problem is decidable, and has a surprisingly low complexity: it is NP-complete when all processes are finite-state machines, and is PSPACE-hard and in NEXPTIME when they are pushdown machines. This complexity is lower than for the non-parameterized case: liveness verification of finitely many finite-state machines is PSPACE-complete, and undecidable for two pushdown machines. For finite-state machines, we characterize infinite behaviors using existential abstraction and semilinear constraints. For pushdown machines, we show how contributor computations of high stack height can be simulated by computations of many contributors, each with low stack height. Together, our results characterize the complexity of verification for parameterized systems under the assumptions of anonymity and asynchrony.
Nicolas Basset (University of Oxford)
Date: Tuesday, March 31 – 14h00
Place: B013
Title: Controller synthesis for multi-component stochastic games with multiple objectives.
I will present the results of two recent papers on the synthesis of controllers (also called strategies) for systems modelled as stochastic games with multiple winning conditions. In the first paper (CONCUR 2014), we introduce a parallel product of stochastic games. We show how the compositional, assume-guarantee verification rules for probabilistic automata (one-player stochastic games) can be lifted to our setting. We explain how to compose winning strategies for the components into a global winning strategy for the composed game. In the second paper (TACAS 2015), we consider the almost-sure satisfaction of several mean-payoff objectives. We give a method for synthesizing epsilon-optimal strategies for this kind of winning condition: given a Pareto-optimal target vector v, the strategy produced keeps the multi-dimensional mean payoff above the threshold vector v - epsilon. We use this synthesis method as a building block in the compositional method of the first paper (CONCUR 2014), which we illustrate on a case study from avionics. This is joint work with Clemens Wiltsche, Marta Kwiatkowska and Ufuk Topcu.
Lionel Rieg (Collège de France)
Date: Thursday, March 26 – 15h00
Place: B011
Title: Formal safety proofs in Coq: robot networks and compilation of synchronous languages.
In the first part, I will present the Pactole project, which aims to build a formal framework in Coq for reasoning about networks of robots. This framework must be flexible enough to accommodate the various model variants (presence of Byzantine robots, sensing capabilities of the robots, ...) by changing its parameters. I will illustrate the interest of this framework on a fundamental problem: gathering (on a line), which asks the robots to meet at a single point. In full generality this problem is known to be impossible, with a formal Pactole proof to back it up. Nevertheless, I will give an algorithm that works for all but a single initial position, precisely the one used to show impossibility in the general case. We will then see how to generalize these results to higher dimensions. In the second part, I will turn to the compilation of the synchronous language Esterel. The goal of this work is to build a proved compiler from Esterel to circuits, as CompCert does from C to assembly. Unlike usual languages, synchronous languages speak precisely about time, which strongly influences their design. The difficulties of compilation are therefore of a very different nature, in particular because the target is not a programming language. I will present the challenges of this compilation and part of the various semantics that relate the source language to its translation into circuits.
Tarik Kaced (Univ. Hong Kong)
Date: Tuesday, March 24 – 11h00
Place: A006
Title: Optimal Non-Perfect Uniform Secret Sharing Schemes.
A secret sharing scheme is non-perfect if some subsets of participants cannot recover the secret value but have some information about it. This work is dedicated to the search for efficient non-perfect secret sharing schemes. The efficiency is measured by means of the information ratio, the ratio between the maximum length of the shares and the length of the secret value. In order to study perfect and non-perfect secret sharing schemes in full generality, we describe the structure of the schemes through their access function, a real function that measures the amount of information that every subset of participants knows about the secret value. We present new tools for the construction of secret sharing schemes. In particular, we construct a secret sharing scheme for every access function. We find a new lower bound on the information ratio that is better than the ones previously known. In particular, this bound is tight for uniform access functions. The access function of a secret sharing scheme is uniform if all participants play the same role in the scheme (e.g. ramp secret sharing schemes). Moreover, we construct a secret sharing scheme with optimal information ratio for every rational uniform access function.
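For orientation, the sketch below gives the perfect, information-ratio-1 special case (a Shamir-style threshold scheme over a toy prime field, written for illustration and not taken from the talk); the ramp schemes studied here shorten the shares at the price of letting sets of intermediate size learn partial information about the secret.

```python
import random

P = 2_147_483_647          # a prime field large enough for a toy secret

def make_shares(secret, t, n):
    """Shamir (t, n) threshold scheme: any t shares recover the secret,
    fewer than t reveal nothing (a perfect scheme, information ratio 1)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(recover(shares[:3]))   # 123456789; any 3 of the 5 shares work
```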
Assalé Adjé (ONERA Toulouse)
Place: A006
Title: Automatic verification of numerical program invariants via constrained optimization and policy iteration.
In general, verifying a property of numerical programs is undecidable. However, the theory of abstract interpretation makes it possible to prove that a property holds by constructing a computable over-approximation of the possible values taken by the program variables, on which the property is true. In this talk, I will show how to construct such an over-approximation using constrained-optimization methods. I will then consider the following problem: how to obtain the most precise over-approximation possible. Indeed, when one wants, for example, to prove that the reachable values are bounded, the tightest possible over-approximation is what one is after. I will show how a fixpoint computation algorithm coming from game theory, namely policy iteration, yields more precise bounds.
Thomas Chatain (LSV)
Date: Friday, March 13 – 14h00
Place: A006
Title: Implementability issues for real-time distributed systems.
Formal models for real-time distributed systems, like time Petri nets and networks of timed automata have proved their interest for the verification of real-time systems. On the other hand, the question of using these models as specifications for designing real-time systems raises several difficulties. Here we focus on the ones that are related to the distributed nature of the system. Implementing a model may be possible at the cost of some transformations, which make it suitable for the target device. In this talk, I present several results about semantics of distributed real-time systems and provide methods for the design of such systems.
[Jannik Dreier|http://people.inf.ethz.ch/jdreier/] (ETH Zürich)
Date: Tuesday, March 10 – 11h00
Place: B013
Title: Formal Verification of (Electronic) Exams.
Nowadays, students can be assessed not only by means of classical pencil-and-paper tests, but also using electronic exams. Such electronic exams can be taken in examination centers, or sometimes even from home via the internet. Electronic exams are appealing as they can reach larger audiences by simplifying the exam process, but they are exposed to new threats, including not only dishonest students, but also outside attackers, and misbehaving authorities. These threats are amplified by two issues: the lack of understanding of what security means for electronic exams (except the old concern about student cheating), and the absence of tools to verify whether an exam system is secure. In this talk, I will present formal definitions of several fundamental authentication, privacy and verifiability properties for exam protocols. As case studies, I will discuss three exam protocols: two recent electronic exam protocols, and the pencil-and-paper exam used by the University of Grenoble, showing that our approach can also cover classical exam systems. The automatic analysis using ProVerif highlights several weaknesses, underlining the need for formal verification.
Chantal Keller (Microsoft Research Cambridge & MSR-Inria Joint Centre)
Place: B013
Title: F*: Higher-Order Effectful Program Verification.
F* is an ML-like language designed for program verification. It is based on refinement types, a powerful extension of ML types to express properties about programs. These properties are automatically checked via the generation of verification conditions that are finally discharged by SMT solvers. F* provides a uniform framework to deal both with programs with effects and non-termination, for which one might want to establish some properties, and pure and terminating programs, that can also be used inside the logical refinements. To this end, it relies on a fine-grained control of effects and a termination checker based on a semantic criterion. F* finally generates code to various backends, ranging from OCaml to JavaScript. After giving an overview of this language, I will present some aspects of its internals, in particular the treatment of effects and termination, and the generation of verification conditions all the way down to SMT solvers. I will finally discuss ongoing work on a full certification of F*.
Henning Schnoor (Univ. Kiel)
Date: Wednesday, March 4 – 14h00
Place: A006
Title: Information-flow Security: Definitions, Characterizations and Verification Complexity.
We develop a theory for state-based noninterference in a setting where different security policies apply in different parts of a given system. Our theory comprises appropriate security definitions, characterizations of these definitions, for instance in terms of unwindings, algorithms for analyzing the security of systems with local policies, and corresponding complexity results. Additionally, we study the verification complexity for established notions of noninterference such as IP-security and TA-security. In both of these cases, we obtain NL algorithms, significantly improving on previously known bounds.
Eugen Zalinescu (ETH Zurich)
Date: Thursday, March 5 – 11h00
Place: A006
Title: Monitoring of Temporal First-order Properties with Aggregations.
In system monitoring, one is often interested in checking properties of aggregated data. Current policy monitoring approaches are limited in the kinds of aggregations they handle. To rectify this, we extend an expressive language, metric first-order temporal logic, with aggregation operators. Our extension is inspired by the aggregation operators common in database query languages like SQL. We provide a monitoring algorithm for this enriched policy specification language. We show that, in comparison to related data processing approaches, our language is better suited for expressing policies, and our monitoring algorithm has competitive performance.
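As a flavour of the policies meant here, a rule such as “the sum of a user's withdrawals over the past 30 days never exceeds 10,000” combines a past temporal operator with an SQL-like aggregation; the formula below is schematic, written in a generic notation rather than the exact syntax of the extended logic.

```latex
% Schematic only, assuming a predicate withdraw(u, a) meaning "user u withdrew amount a":
\square\, \forall u.\ \forall s.\;
  s = \mathrm{SUM}\bigl\{\, a \;\bigm|\; \blacklozenge_{[0,30]}\ \mathit{withdraw}(u, a) \,\bigr\}
  \;\rightarrow\; s \le 10000
% "At every time point, the sum of u's withdrawals over the last 30 days is at most 10000."
```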
Benjamin Hellouin (Univ. Andres Bello, Santiago)
Date: Tuesday, February 24 – 14h00
Place: A006
Title: Limit measures in cellular automata.
I am interested in the behaviour of cellular automata on a random initial configuration, which amounts to studying their action on the space of probability measures. I will give an overview of my research on the limit measures of these systems, which can be seen as their typical asymptotic behaviour, or as their output when viewed as probabilistic algorithms. In dimension 1, the measures reachable from a simple initial measure, e.g. the uniform measure, can be fully characterized by computability conditions. Lifting this result to higher dimensions is work in progress that has already produced partial results. However, this approach cannot yield limit measures of full support, which can only be reached by surjective dynamics. Although surjective cellular automata are capable of computation, their action on measures exhibits a strong rigidity that is still poorly understood. Experiments suggest that certain subclasses make every simple initial measure converge to the uniform measure. We give the first complete proof of this phenomenon for a cellular automaton with particular algebraic properties, using self-similar structures in its time evolution.
Hicham Lakhlef (Univ. Franche-Comté)
Date: Tuesday, February 17 – 14h00
Place: A006
Title: Distributed algorithms for optimizing the deployment of MEMS microrobots: are formal methods necessary?
In this seminar I will present some of the work done during my PhD, on distributed algorithms for the reconfiguration of MEMS microrobots. MEMS microrobots are attracting growing attention since they can carry out many missions and tasks in a wide range of applications, including odour localization, fire fighting, medical service, security and rescue. They can operate in environments that are unstructured, complex, dynamic and unknown. To accomplish these missions and tasks, MEMS microrobots must run redeployment protocols in order to adapt to the working conditions. These reconfiguration protocols have to cope with the characteristics of MEMS nodes, notably their limited resources (memory, energy). These algorithms should also be efficient, scalable, robust, and use only local information. In this seminar, I present efficient reconfiguration algorithms for modular MEMS microrobots. The goal of these algorithms is to optimize the logical topology of the microrobots using distributed redeployment protocols. I present and compare two distributed algorithms for reconfiguring a chain into a square. The first is dynamic in the wake-sleep sense and preserves network connectivity during the reconfiguration process; the second, which is faster, does not preserve connectivity during reconfiguration. I show how to use movement parallelism to optimize the running time of the algorithms and minimize the number of node movements, while keeping the network connected and the algorithms robust and scalable. Finally, I present a reconfiguration algorithm that handles any starting topology. As a perspective, I will argue that formal methods are needed to prove that microrobot systems programmed with distributed protocols will always behave as expected, under changing conditions and in a variety of uncertain environments.
Xavier Urbain (CNAM)
Place: A006
Title: A framework for formal proofs tailored to networks of mobile robots.
Networks of mobile robots have received growing attention from the distributed algorithms community in recent years. While using swarms of cooperating robots to carry out various tasks is an appealing prospect, analysing the feasibility of certain tasks in this emerging setting is extremely hard, in particular when some robots exhibit so-called Byzantine behaviour, that is, arbitrary or even hostile behaviour. To obtain formal guarantees in this context, we propose a formal, mechanized framework based on the Coq proof assistant and tailored to robot networks. In this talk we focus mainly on impossibility results, which are fundamental in that they establish what can or cannot be achieved and make it possible to define bounds and, thereby, the optimality of certain solutions. Using a higher-order assistant such as Coq allows us to express quantifications over algorithms easily, the algorithms being treated as abstract objects. We illustrate the possibilities offered by our development by presenting the first formal proofs (and thus certifications) of results such as the impossibility of convergence for oblivious robots when a third of them are Byzantine, or the impossibility of gathering for an even number of robots moving in R.
Erika Abraham (RWTH Aachen)
Date: Tuesday, June 12 – 14h00
Place: C005
Title: Reachability Analysis of Hybrid Systems.
Hybrid systems exhibit both dynamic and discrete behavior. Typically, the dynamic behavior stems from the continuous evolution of physical systems, whereas the discrete behavior stems from the execution steps of discrete controllers. As a modeling language we consider hybrid automata, extending discrete automata with continuous dynamics. Given a hybrid automaton model of a hybrid system, the perhaps most basic question one could be interested in is whether certain model states can be reached from a given initial state. Though this reachability problem sounds simple, it is undecidable for all but some very simple subclasses of hybrid automata. Nevertheless, there are different techniques to solve it in an incomplete manner. We discuss methods that use different geometric objects like polytopes, zonotopes, ellipsoids etc. to represent state sets, and apply a forward fixedpoint-based search to over-approximate the set of reachable states. Such methods need to determine successor sets under both discrete jumps and continuous evolution. The latter is often done by flowpipe construction, paving the whole flow by a set of geometric objects of the chosen type.
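As a minimal illustration of representing state sets by geometric objects, the sketch below (dynamics and all numbers invented; axis-aligned boxes are the crudest possible choice compared to the polytopes and zonotopes mentioned above) propagates an interval box through a discretised linear flow and reports whether the over-approximation ever meets an unsafe box.

```python
import numpy as np

def step_box(A, lo, hi):
    """Tightest axis-aligned box containing {A x : lo <= x <= hi} (interval arithmetic)."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    new_center = A @ center
    new_radius = np.abs(A) @ radius
    return new_center - new_radius, new_center + new_radius

def boxes_intersect(lo1, hi1, lo2, hi2):
    return np.all(lo1 <= hi2) and np.all(lo2 <= hi1)

# Invented discretised dynamics: a contracting rotation toward the origin.
A = np.array([[0.90, 0.05],
              [-0.05, 0.90]])
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])            # initial set
bad_lo, bad_hi = np.array([1.5, -0.5]), np.array([2.0, 0.5])    # unsafe region

for k in range(200):
    if boxes_intersect(lo, hi, bad_lo, bad_hi):
        print(f"possible violation at step {k} (over-approximation hit the bad set)")
        break
    lo, hi = step_box(A, lo, hi)
else:
    print("bad set not reached within 200 steps (up to the approximation)")
```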
David Deharbe (Univ. Rio Grande do Norte)
Date: Tuesday, May 20 – 14h00
Place: B013
Title: b2llvm: B developments onto the LLVM.
We present b2llvm, a code generator for the B-method targeting the LLVM intermediate representation. b2llvm currently handles the following elements of the B language: simple data types, imperative instructions and component compositions. The translation rules from the essential constructs of the B implementation language to LLVM code are presented and illustrated with examples. Some verification and validation strategies are proposed.
Jasmin Blanchette (TU Munich)
Date: Tuesday, April 29 – 14h00
Place: B013
Title: Proofs, Counterexamples, and Codatatypes for Isabelle/HOL.
Isabelle is a generic proof assistant that supports HOL (Higher-Order Logic), ZF (Zermelo-Fraenkel Set Theory), and TLA+ (Temporal Logic of Actions). My research so far has focused on Isabelle/HOL: (1) I enhanced the popular Sledgehammer proof tool, which is based on external automatic provers (e.g., E, SPASS, Vampire, and Z3); (2) I designed the SAT-based counterexample generator Nitpick; (3) together with a few colleagues, I am adding support for codatatypes and corecursive functions. In this talk, I will briefly summarize these strands of research and my future plans.
Clément Aubert (IML)
Date: Wednesday, March 19 – 14h00
Place: B013
Title: Some contributions of Linear Logic to complexity theory.
In proof theory, Linear Logic offers a formalization of proofs that places their dynamics at the centre of attention and pays close attention to structural operations. Since the introduction of its so-called bounded and light variants, Linear Logic has contributed to complexity theory in several ways. We will present several advances, focusing on implicit characterizations of complexity classes, that is, characterizations independent of measures on abstract machine models. Using a range of syntactic and semantic tools, Linear Logic has managed to characterize parallel execution, non-determinism, and time- or space-efficient programs. We will give a quick overview of these methods, relating them to other implicit approaches to complexity and highlighting our contributions.
Rohit Parikh (CUNY)
Date: Friday, November 8 – 15h00
Place: C005
Title: Knowledge is Power, and so is Communication.
The BDI theory says that people’s actions are influenced by two factors, what they believe and what they want. Thus we can influence people’s actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare’s Beatrice-Benedick case in Much Ado about Nothing is an old example. Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues: a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents? b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge? c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict?
Iordanis Kerenidis (CNRS, LIAFA)
Date: Tuesday, October 15 – 14h00
Place: A008
Title: Tutorial on Quantum Communication.
Part of the [Journées Informatique Quantique|http://www.loria.fr/~sperdrix/JIQ13.html] 2013, October 14 and 15.
Josef Widder (TU Wien)
Date: Wednesday, October 9 – 14h00
Place: C005
Title: Parameterized Model Checking of Fault-tolerant Distributed Algorithms by Abstraction.
Fault-tolerant distributed algorithms are central for building reliable, spatially distributed systems. In order to ensure that these algorithms actually make systems more reliable, we must ensure that these algorithms are actually correct. Unfortunately, model checking state-of-the-art fault-tolerant distributed algorithms (such as Paxos) is currently out of reach except for very small systems. In order to be eventually able to model check such fault-tolerant distributed algorithms also for larger systems, several problems have to be addressed. In this talk, I will present recent work that considers modeling and verification of fault-tolerant algorithms that basically only contain threshold guards to control the flow of the algorithm. I will present an abstraction technique that allows parameterized model checking of safety and liveness properties of such algorithms. Our experimental data shows that our abstraction is precise, and allows automated verification of simple, fault-tolerant distributed algorithms for various fault models. As threshold guards are widely used in fault-tolerant distributed algorithms (and also in Paxos), efficient methods to handle them bring us closer to the above mentioned goal.
David Janin (LaBRI)
Date: Tuesday, October 8 – 15h00
Place: B013
Title: The philosopher diner revisited: a new algebraic approach for an old modeling problem ?
In the field of distributed algorithms, the dining philosophers problem is a typical algorithmic and modeling problem. How to model distributed states? Local transitions? Their combinations? Resulting global transitions? Correctness criteria? There already are many modeling proposals. In this talk, we will provide yet another one. We will show how inverse semigroup theory, a mathematical theory developed since the 1950s yet quite unknown in computer science, and the associated language theory provide robust concepts and tools for that modeling. This is a joint (exploratory) work with Anne Dicky to be presented at MSR 2013 in Rennes. Oddly enough, such an approach was inspired by recent experiments in music system modeling and programming.
Bruno Conchinha (ETH Zurich)
Date: Tuesday, September 24 – 14h00
Place: C005
Title: Symbolic Probabilistic Analysis of Off-line Guessing.
We introduce a probabilistic framework for the automated analysis of security protocols. Our framework provides a general method for expressing properties of cryptographic primitives, modeling an attacker who is more powerful than conventional Dolev-Yao attackers. It allows modeling equational properties of cryptographic primitives as well as property statements about their weaknesses, e.g. primitives leaking partial information about messages or the use of weak algorithms for random number generation. Moreover, we can use these properties to find attacks and estimate their success probability. Existing symbolic methods can neither model such properties nor find such attacks. We show that the probability estimates we obtain are negligibly different from those yielded by a generalized random oracle model based on sampling terms into bitstrings while respecting the stipulated properties of cryptographic primitives. As case studies, we use a prototype implementation of our framework to model non-trivial properties of RSA encryption and automatically estimate the probability of off-line guessing attacks on the EKE protocol.
Mark Ryan (University of Birmingham)
Date: Tuesday, July 23 – 14h00
Place: C005
Title: Malicious-but-cautious certificate authorities.
The “certificate authority” model for authenticating public keys of websites has been attacked in recent years, and several proposals have been made to reinforce it. We develop and extend a recent proposal called “certificate transparency”, so that it properly handles certificate revocation. We show how extended certificate transparency has applications beyond the web. In particular, we detail how it can be used to build a secure end-to-end email or messaging system using PKI with no trusted parties. We believe this finally makes end-to-end encrypted email as usable and user-friendly as encrypted web browsing is today. Underlying these ideas is a new attacker model appropriate for cloud computing, which we call “malicious-but-cautious”.
Florian Deloup (Institut de Mathématiques de Toulouse, Univ. Toulouse III)
Date: Tuesday, June 25 – 14h00
Place: C005
Title: The genus of regular languages.
If one forgets part of its structure, an automaton can be regarded as a graph, for which the notion of genus is well defined. Intuitively, a graph has genus at most g if it embeds into a sphere with g handles. Of course, from a computer-science point of view, automata are richer than graphs: they compute languages, and a given regular language is recognized by infinitely many automata. It is therefore natural to define the genus of a regular language L as the minimal genus of an automaton recognizing L. We will present some results and open questions about this new invariant of languages.
Sylvain Salvati (INRIA, LaBRI)
Date: Wednesday, May 15 – 14h00
Place: C005
Title: Evaluation is MSOL compatible.
We consider simply-typed lambda calculus with fixpoint operators. Evaluation of a term gives as a result the Böhm tree of the term. We show that evaluation is compatible with monadic second-order logic (MSOL). This means that for a fixed finite vocabulary of terms, the MSOL properties of Böhm trees of terms are effectively MSOL properties of terms themselves. Theorems of this kind have been known for some graph operations: unfolding, and Muchnik iteration. Similarly to those results, our main theorem has diverse applications. It can be used to show decidability results, to construct classes of graphs with decidable MSOL theory, or to obtain MSOL formulas expressing properties of terms. Another application is decidability of a control-flow synthesis problem. This is joint work with Igor Walukiewicz.
Joost-Pieter Katoen (Aachen University)
Date: Tuesday, April 16 – 14h00
Place: C005
Title: Analyzing Probabilistic Programs: Pushing the Limits of Automation.
Probabilistic programs are pivotal for cryptography, privacy and quantum computing; they are typically only a few lines long, but hard to understand and analyse. The main challenge for their automated analysis is the infinite domain of the program variables combined with the occurrence of parameters. Here, parameters can range over concrete probabilities but can also encompass various aspects of the program, such as the number of participants, the size of certain variables, etc. McIver and Morgan have extended the seminal proof method of Floyd and Hoare for sequential programs to probabilistic programs by making the program annotations real- rather than Boolean-valued expressions in the program variables. As in traditional program analysis, the crucial annotations are the loop invariants. The problem of synthesizing loop invariants is very challenging for probabilistic programs, where real-valued, quantitative invariants are needed. In this talk, we present a novel approach to synthesize loop invariants for the class of linear probabilistic programs. In order to provide an operational interpretation to the quantitative annotations, we give a connection between McIver and Morgan’s wp-semantics of probabilistic programs and parameterized Markov decision processes.
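To make the notion of a real-valued loop invariant concrete, here is a standard textbook-style example in the McIver-Morgan weakest pre-expectation setting (not taken from the talk): a loop that flips a biased coin until it lands on 0, with a quantitative invariant capturing the expected final value of the counter.

```latex
% Loop (c = 1, n = 0 initially, bias 0 < p <= 1):
%     while (c = 1) { (c := 0 [p] skip); n := n + 1 }
% Quantitative invariant for the post-expectation n:
I \;=\; n + [c = 1]\cdot\tfrac{1}{p}
% Check (wp = weakest pre-expectation):
[c = 1]\cdot \mathrm{wp}(\mathit{body})(I) + [c \neq 1]\cdot n
  \;=\; [c = 1]\cdot\bigl(p\,(n+1) + (1-p)\,(n + 1 + \tfrac{1}{p})\bigr) + [c \neq 1]\cdot n
  \;=\; I,
% so starting from c = 1, n = 0, the expected final value of n is 1/p.
```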
Binh Thanh Nguyen (ETH Zurich)
Date: Tuesday, April 9 – 14h00
Place: C005
Title: Sound Security Protocol Transformations.
We propose a class of protocol transformations, which can be used to (i) develop (families of) security protocols by refinement and (ii) abstract existing protocols to increase the efficiency of verification tools. We prove the soundness of these transformations with respect to an expressive security property specification language covering secrecy and authentication properties. Our work clarifies and significantly extends the scope of earlier work in this area. We illustrate the usefulness of our approach on a family of key establishment protocols.
Induction in SMT: Seminar day
Date: Wednesday, March 27 –
Place: B013
11:00 – 12:00 __Decidability of inductive theorems based on rewriting induction__. Takahito Aoto. Rewriting induction (Reddy, 1990) is a method for proving inductive theorems of term rewriting systems (TRSs). In general, it is undecidable whether a given equation is an inductive theorem of a given TRS. Toyama (2002) and Falke and Kapur (2006) gave characterizations of equations and TRSs for which this question is decidable, based on rewriting induction, that is, using rewriting induction as the underlying decision procedure. We give an abstract view of this approach and try to integrate some known decidable classes.
15:00 – 16:00 __Rewriting Induction + Linear Arithmetic = Decision Procedure__. Stephan Falke. This talk discusses new results on the decidability of inductive validity of conjectures. For these results, a class of term rewrite systems (TRSs) with built-in linear integer arithmetic is introduced, and it is shown how these TRSs can be used in the context of inductive theorem proving. The proof method developed couples (implicit) inductive reasoning with a decision procedure for the theory of linear integer arithmetic with (free) constructors. The effectiveness of the new decidability results on a large class of conjectures is demonstrated by an evaluation of the prototype implementation Sail2.
16:00 – 17:00 __Noetherian Induction for First-Order Reasoning__. Sorin Stratulat. Noetherian induction is a powerful mathematical principle that allows one to reason about sets with an unbounded number of elements, provided there is a Noetherian ordering that forbids the construction of any infinite strictly decreasing sequence of elements. In first-order logic, two instantiations of it can be distinguished, depending on the kind of elements being reasoned about: terms or (first-order) formulas. The term-based principles rely on schemata. An induction schema attaches a set of formulas as induction hypotheses (IHs) to another formula, called the induction conclusion, with the intention that the IHs be used in the proof of the induction conclusion. Widely spread among proof assistants, it can easily be integrated into sequent-based inference systems as a separate inference rule. The induction reasoning is local, at the schema level, and the induction orderings may differ from one schema to another. However, it is not lazy, since the IHs have to be defined (sometimes long) before their use, or end up not being used at all. Moreover, it cannot naturally handle mutual induction with other formulas of the proof. The formula-based principles derive from Musser’s inductionless induction (or proof by consistency) method, based on the Knuth-Bendix completion algorithm. The reasoning methods implementing them are lazy and allow mutual induction. On the other hand, they are reductive, i.e. the newly derived formulas in the proof must be smaller (or equal), at the ground level, than the formula currently processed, and the induction ordering is unique for a given proof. In this talk, we sketch a cycle-based induction method that generalizes and keeps the best features of the above methods. The application of any IH is validated by a cycle of formulas governed by an induction ordering. The reasoning is local, at the cycle level, free of the reductiveness restriction, and naturally performs mutual and lazy induction.
From a theoretical point of view, our method synthesizes the overall usage of Noetherian induction reasoning in first-order logic. From a practical point of view, it allows for less restrictive specifications, more efficient implementations and proof certification processes.
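For concreteness, a canonical example of the kind of statement these talks address (standard textbook material, not drawn from the abstracts): over the rewrite system defining addition on Peano numerals, the right-zero law is not an equational consequence of the rules but is an inductive theorem, provable by rewriting induction on x.

```latex
% TRS R defining addition on the constructors 0 and s:
\mathit{plus}(0, y) \to y \qquad \mathit{plus}(s(x), y) \to s(\mathit{plus}(x, y))
% Candidate inductive theorem (valid in the initial model, not derivable purely equationally):
\mathit{plus}(x, 0) = x
% Rewriting induction: the base case x = 0 closes by the first rule; in the step case
% x = s(x'), the left-hand side rewrites to s(plus(x', 0)) and the induction hypothesis
% plus(x', 0) = x' closes the goal.
```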
Professor Neil Jones (University of Copenhagen, visiting UL – LORIA)
Date: Wednesday, March 27 – 14h00
Place: A008
Title: Obfuscation by Partial Evaluation of Distorted Interpreters
(with Roberto Giacobazzi, Isabella Mastroeni, Verona; from PEPM 2012) How to construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P’ from any program P whose source code is given. Start with a (program-executing) interpreter interp for the language in which P is written. Then “distort” interp so it is still correct, but its specialization P’ with respect to P is transformed code that is equivalent to the original program, but harder to understand or analyze. Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P’ on which (abstract) interpretation is incomplete. Interpreter distortion can be done by leaving residual, in the specialization process, sufficiently many interpreter operations to prevent an attacker from extracting sensible information from the transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
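One of the three applications mentioned, opaque predicate insertion, can be illustrated by a toy string-level source-to-source pass (invented for this illustration and unrelated to the paper's partial-evaluation machinery): each statement is guarded by a predicate that is true for every integer, so control flow looks data-dependent to an analyzer while behaviour is unchanged.

```python
import random

# Opaque predicates: expressions over an integer variable that always evaluate
# to True, but whose constancy is not obvious to a syntactic analyzer.
OPAQUE_TRUE = [
    "({v} * ({v} + 1)) % 2 == 0",        # product of consecutive integers is even
    "({v} * {v}) % 4 != 2",              # squares are 0 or 1 modulo 4
    "(7 * {v} * {v} + 1) % 7 != 0",      # 7x^2 + 1 is never divisible by 7
]

def obfuscate(lines, var="x"):
    """Wrap each statement of a (toy, string-level) function body in an
    opaquely-true branch, with a bogus alternative that never runs."""
    out = []
    for line in lines:
        pred = random.choice(OPAQUE_TRUE).format(v=var)
        out.append(f"if {pred}:")
        out.append(f"    {line}")
        out.append("else:")
        out.append(f"    {var} = {var} ^ 0xDEADBEEF   # dead code, never executed")
    return out

body = ["x = x + 1", "x = x * 3"]
print("\n".join(obfuscate(body)))
```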
Christophe Calvès (UL – LORIA)
Date: Tuesday, March 26 – 14h00
Place: A008
Title: Multi-Focus Rewrite Rules
Rewrite rules are frequently used to perform program transformations. This includes compilation (optimisations, code generation, …) and evaluation to specify program semantics, for state exploration or just interpretation. This talk presents multi-focus rewrite rules and strategies, which can traverse, match and rewrite several subterms of a subject in a single rule. These rules are particularly well suited to the specification of programming-language semantics, as this often requires considering several elements (code fragment, memory, …) to perform actions. We will demonstrate the modularity and compositionality of our approach with the specification, in TOM (a compiler embedding algebraic terms and rewrite rules in many languages), of the semantics of an ML-like toy language.
Quantic Day
Date: Thursday, March 21 – 10h00-18h00
Place: C103
On Thursday, March 21, a day devoted to quantum computing will take place in room C103.
10h00 – Welcome
10h30 – 11h15 Gilles Dowek – TBA
11h15 – 12h00 Pablo Arrighi – Quantum Cellular Automata: structure, universality
14h00 – 14h45 Maarten Van den Nest – Classical simulation complexity of quantum circuits.
14h45 – 15h30 Emmanuel Jeandel – Dynamics of models of computation
16h15 – 16h45 Benoit Valiron – Quipper: a scalable quantum programming language
16h45 – 17h15 Vincent Nesme – Quantum walks and filter banks
17h15 – 17h45 Alejandro Diaz-Caro – Vectorial types, non-determinism and probabilistic systems: towards a computational quantum logic
Hans van Ditmarsch (CNRS – LORIA)
Date: Tuesday, March 12 – 14h
Place: A008
Title: Russian Cards, communication protocols, and cryptography
Consider the following logic puzzle, known as ‘Russian Cards’: From a pack of seven known cards 0,1,2,3,4,5,6 Alice and Bob each draw three cards and Eve gets the remaining card. How can Alice and Bob openly (publicly) inform each other about their cards, without Eve learning, for any of their cards, who holds it? Clearly Alice and Bob are required to devise a secure protocol to communicate their secrets to each other, and Eve is the eavesdropper. Unlike traditional security settings, initially the three agents share knowledge: how many cards there are, and that everybody knows his own cards. This riddle, similar riddles, and things such as schemes for bidding in Bridge, have played a role in the development of information-based cryptography. One original publication (not treating this riddle) is by Fischer & Wright: Bounds on Secret Key Exchange Using a Random Deal of Cards, Journal of Cryptology, 1996. I will present and discuss solutions to the riddle. There are several solutions. I will explain its interest to the dynamic epistemic logic community: various ‘near solutions’ can be described in terms of knowledge, common knowledge, and preservation of knowledge properties after update (typically, making ignorance public can leak information). I will summarily present some recent results more or less in combinatorial mathematics. This includes work with my – until recently – collaborators in Sevilla. Some relevant publications: * M.H. Albert, R.E.L. Aldred, M.D. Atkinson, H.P. van Ditmarsch, C.C. Handley, Safe communication for card players by combinatorial designs for two-step protocols. Australasian Journal of Combinatorics, 33:33-46, 2005. * Andrés Cordón-Franco, Hans van Ditmarsch, David Fernández Duque, Joost J. Joosten, Fernando Soler-Toscano. A secure additive protocol for card players. Australasian Journal of Combinatorics 54: 163-175, 2012. * Colleen Swanson, Douglas Stinson. Combinatorial Solutions Providing Improved Security for the Generalized Russian Cards Problem. Designs, Codes and Cryptography; Published online first November 2012.
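Since the informativeness and safety conditions on an announcement range over finitely many deals, they can be checked by brute force; the sketch below (an illustration of the conditions, not one of the cited constructions) tests whether a set of candidate hands publicly announced by Alice always lets Bob deduce her hand while keeping Eve ignorant of who holds each card she does not hold.

```python
from itertools import combinations

CARDS = set(range(7))  # 0..6: Alice and Bob hold 3 each, Eve holds the remaining one

def good_announcement(announcement):
    """announcement: a collection of 3-card hands that Alice might publicly declare
    as 'my hand is one of these'. Returns True iff the announcement is informative
    for Bob and safe against Eve, for every deal consistent with it."""
    announcement = [frozenset(h) for h in announcement]
    for alice in announcement:                                   # Alice's real hand
        for bob in map(frozenset, combinations(CARDS - alice, 3)):
            eve_card = (CARDS - alice - bob).pop()
            # Bob's view: announced hands disjoint from his own hand.
            bob_view = [h for h in announcement if not (h & bob)]
            if bob_view != [alice]:
                return False                  # Bob cannot pin down Alice's hand
            # Eve's view: announced hands avoiding her card.
            eve_view = [h for h in announcement if eve_card not in h]
            for card in CARDS - {eve_card}:
                holders = {card in h for h in eve_view}
                if holders != {True, False}:
                    return False              # Eve learns who holds `card`
    return True

# A trivially bad announcement: revealing the hand itself lets Eve learn everything.
print(good_announcement([{0, 1, 2}]))        # False
```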
Benoît Groz (Université de Tel-Aviv)
Date: Friday, March 1 – 14h
Place: A008
Title: Deterministic regular expressions
Deterministic regular expressions are used in several XML schema languages (DTDs, XML Schema). We present techniques for testing in linear time whether a regular expression is deterministic. We also show that membership of a word in the language of an expression can be decided in linear time for large families of deterministic expressions. Our algorithms build data structures over the syntax tree of the expression, whereas the classical algorithms for these problems rely on the construction of the Glushkov automaton, a quadratic construction when the size of the alphabet is unbounded.
Kumbakonam Govindarajan Subramanian (Prof. at Universiti Sains Malaysia)
Date: Tuesday, February 12 – 14h
Place: A008
Title: Parikh Matrices of Words and Their Properties
Words and their combinatorial properties constitute a growing area of research that can be classified under Discrete Mathematics. A number of fundamental and significant mathematical results have been revealed in this study. This area can also be considered as a study of discrete sequences or a study of combinatorial properties of free monoids. A word is a finite or an infinite sequence of symbols taken from a finite set, called an alphabet. A subword of a given word w is a subsequence of w. In the recent past, the notion of a special matrix, called the Parikh matrix, associated with a word has been introduced and investigated. This matrix gives information on the number of certain subwords of a given word. This talk is intended to provide some background and highlight interesting results established by researchers on the topic of Parikh matrices. An extension of the notion of a Parikh matrix to picture arrays, which are rectangular arrangements of symbols, will also be indicated.
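As a concrete companion to the definition (the construction is standard; the code is an illustration, not from the talk): over an ordered alphabet a1 < ... < ak, the Parikh matrix of a word is the product of elementary (k+1)×(k+1) upper-triangular matrices, one per letter, and its entry (i, j+1) counts the occurrences of the scattered subword a_i a_{i+1} ... a_j.

```python
import numpy as np

def parikh_matrix(word, alphabet):
    """Parikh matrix of `word` over the ordered `alphabet` (string a1 < a2 < ... < ak).
    Entry (i, j+1) counts the occurrences of the scattered subword a_i ... a_j."""
    k = len(alphabet)
    M = np.eye(k + 1, dtype=int)
    for letter in word:
        q = alphabet.index(letter)
        E = np.eye(k + 1, dtype=int)
        E[q, q + 1] = 1            # elementary matrix associated with the q-th letter
        M = M @ E
    return M

print(parikh_matrix("abab", "ab"))
# [[1 2 3]
#  [0 1 2]
#  [0 0 1]]
# entry (0,1): 2 occurrences of 'a'; (1,2): 2 of 'b'; (0,2): 3 of the subword 'ab'.
```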
Etienne Lozes (LSV – Univ. Cassel)
Date: Friday, February 15 – 14h
Place: A008
Title: Proofs of copyless message-passing programs in Heap-Hop
Designing message-passing programs can become difficult when the cost of copying a message from the sender’s address space to the receiver’s becomes a critical factor for the program’s efficiency. One solution to this problem is to use an address space shared by the sender and the receiver. In this approach, copying can be avoided, but sender and receiver must agree on how memory is accessed. Jules Villard and I introduced an extension of separation logic that makes it possible to prove such programs correct with respect to memory accesses. Inspired by the Sing# language, our system introduces a notion of channel contract that guarantees properties stronger than those usually established by separation logic: proved programs are guaranteed free of communication errors (forgotten messages, unexpected messages) and free of memory leaks. I will illustrate these ideas with the Heap-Hop tool, and give an overview of our ongoing work on the theory of channel contracts.
Dominique Unruh (University of Tartu, Estonia)
Date: Tuesday, February 5 – 14h
Place: A008
Title: Everlasting Security through Quantum Cryptography
Cryptography is a powerful tool for processing confidential data. Cryptographic protocols are, however, only as secure as the underlying encryption schemes. And we do not know whether these might not be broken at some point in the future. This leaves us in a difficult situation: If we process highly sensitive data using cryptographic protocols, an attacker might simply record all messages. Should the underlying encryption scheme be broken in the future, the attacker will then be able to decrypt all confidential data in retrospect. For highly confidential data (such as, e.g., medical records) such a situation is not acceptable. A way out is “everlasting security”. A protocol with everlasting security guarantees that all data involved in the protocol stays secure, even if at some point in the future all the underlying encryption schemes are broken. Unfortunately, with traditional cryptographic techniques, everlasting security can only be achieved in very limited situations. In this talk, we explain how everlasting security can be achieved for a wide variety of tasks by using quantum cryptography, i.e., by making use of quantum mechanical effects in the cryptographic protocol. (No prior knowledge on quantum cryptography is required.)
Stefan Hetzl (INRIA Saclay)
Date: Tuesday, December 18 – 14h
Place: A008
Title: Algorithmic Structuring and Compression of Proofs
This talk will describe a method for the compression of formal proofs by the computation of new lemmas. We use tree grammars for an efficient treatment of the term-structure of a first-order proof. Our algorithm works by first minimizing a tree grammar which, in a second step, induces a formula that serves as abbreviating lemma. Finally, I will outline an approach to extending this work to a method for inductive theorem proving.
Nicolas Pouillard (DemTech project, IT University of Copenhagen (ITU))
Date: Friday, November 9 – 11h
Place: A008
Title: Proving cryptographic schemes in Agda (Dependently typed functional programming for Alice and Bob)
In order to gain confidence in cryptographic schemes and primitive operations, cryptographers use a combination of theories. These theories enable us to mathematically prove security properties of these constructions. Probability theory, game theory, number theory and complexity theory are often all needed at the same time. We aim at realizing these proof arguments in Type Theory using Agda. Our current approach is to carefully use abstractions and dependently typed functional programming techniques to elegantly prove security properties. Finally we aim at reducing the usual gap between the formal description of the construction and its actual implementation. We do so by writing in a functional style supporting extraction into low level circuits.
Sergueï Lenglet (Université de Lorraine, LORIA)
Date: Tuesday, October 23 – 14h
Place: A008
Title: Bisimulations and congruence proofs in higher-order languages
Bisimilarities are equivalences that relate terms of a language exhibiting the same behaviour (for a given definition of “same behaviour”). To prove that a bisimilarity is sound, one must prove that it is a congruence, i.e. preserved by contexts: if two terms are bisimilar, they remain bisimilar when placed in any context. This is a particularly hard problem in higher-order languages such as the lambda-calculus or the higher-order pi-calculus (in which the exchanged messages contain executable processes). For languages derived from the lambda-calculus, there is a systematic method for proving congruence, called Howe’s method. We will see how to adapt Howe’s method to the higher-order pi-calculus. At the end of the talk I will also briefly mention my other work, to give you an idea of my research interests and possibly start collaborations within the department.
Serdar Erbatur (SUNY Albany)
Date: Tuesday, October 16 – 14h
Place: A008
Title: Unification in Blind Signatures
Blind signatures are signature schemes that keep the content confidential and have applications in modern cryptography for electronic voting and for digital cash schemes. We study unification modulo axioms that originate from the RSA implementation of blind signatures. In this talk, we will present one such axiom, called E. The axiom E has an undecidable unification problem. It specifies that a binary operator distributes synchronously over another binary operator, hence the name “synchronous distributivity”. Furthermore, it appears that this is the weakest equational theory which has an undecidable unification problem. Next, we briefly present the axiom F, which is similar to E, and show that it has a decidable unification problem. This last axiom is not related to blind signatures but satisfies a property of El Gamal encryption. These are only some of many possible axiomatizations of the algebraic properties of blinding schemes. The results we obtained will be useful for analyzing protocols with blinding once the algorithms are integrated into protocol analysis tools.
Mark D. Ryan (University of Birmingham)
Date: Tuesday, July 10 – 11h
Place: A008
Title: Corruption evidence in electronic voting
Coercion-resistance is a fundamental, and strong, property of electronic voting systems. It states that a voter should be able to cast his vote as intended, even in presence of a coercer that may try to force him to cast a different vote. Especially in remote voting, coercion-resistance is extremely hard to achieve and leads either to schemes of questionable usability or to trust assumptions that are difficult to meet in practice. We propose a change of perspective, replacing the requirement of coercion-resistance with a requirement that we call “corruption-evidence”: there should be public evidence of the amount of coercion and other corruption that has taken place during a particular execution of the voting system. Depending on the significance of this amount, election authorities can decide to consider the election valid or not, leading to disincentivisation of coercion.
Barbara Kordy (Université du Luxembourg)
Date: Tuesday, May 15 – 14:00
Place: Amphi C
Title: Attack-Defense Tree Methodology for Security Assessment
Attack-defense trees are a novel methodology for graphical security modeling and assessment. They extend the well established formalism of attack trees, by allowing nodes that represent defensive measures to appear at any level of the tree. This extension enlarges the modeling capabilities of attack trees and makes the new formalism suitable for representing interactions between possible attacks and corresponding defensive measures. This talk will give an overview of the attack-defense tree methodology. After introducing the syntax as well as possible semantics for attack-defense trees, a number of interesting questions, including equivalent representations of security scenarios, quantitative analysis using attack-defense trees and computational complexity of the considered model, will be discussed. Bio: Barbara Kordy is a postdoctoral researcher at the University of Luxembourg. She is currently leading the ATREES project which focuses on modeling and evaluating vulnerability scenarios using the attack-defense tree methodology. Her research interests include formal methods for security assessment, privacy and anonymity issues and analysis of security protocols. She received her Ph.D. and master’s degree in computer science from the University of Orléans in France. She also holds a master’s degree in mathematics obtained at the University of Silesia in Poland.
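One common first semantics for such trees is purely propositional: an attack node succeeds if its refinement succeeds and its attached countermeasure fails. The sketch below evaluates a small invented scenario under that one semantics; it is an illustration, not the talk's full framework with quantitative attributes and complexity analysis.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """Attack-defense tree node. `kind` is 'AND' or 'OR' for refinements,
    or 'LEAF' with a truth value meaning 'this basic action is available'."""
    kind: str
    value: bool = False
    children: List["Node"] = field(default_factory=list)
    counter: Optional["Node"] = None      # defensive measure attached to this node

def succeeds(n: Node) -> bool:
    if n.kind == "LEAF":
        own = n.value
    elif n.kind == "AND":
        own = all(succeeds(c) for c in n.children)
    else:  # OR
        own = any(succeeds(c) for c in n.children)
    # a node is defeated if its countermeasure (itself possibly refined) succeeds
    return own and not (n.counter is not None and succeeds(n.counter))

# Invented scenario: steal data by phishing OR by breaking in; the break-in
# branch is countered by an alarm that is actually installed.
tree = Node("OR", children=[
    Node("LEAF", value=True),                                    # phishing works
    Node("AND",
         children=[Node("LEAF", value=True), Node("LEAF", value=True)],
         counter=Node("LEAF", value=True)),                      # alarm installed
])
print(succeeds(tree))   # True: the phishing branch is not countered
```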
Sergueï Lenglet (Université Paris-Diderot, PPS)
Date: Monday, April 23 – 14:00
Place: A008
Title: Bisimulations for delimited-control operators
We present the first results on the definition of a behavioural theory for a lambda-calculus extended with the delimited-control operators shift and reset. We first define a notion of behavioural equivalence, which we then seek to characterize using (applicative and normal-form) bisimulations. We also study the relationship between this behavioural equivalence and another equivalence based on a CPS transformation.
Markulf Kohlweiss (Microsoft Research Cambridge)
Date: Thursday, April 5 – 14:00
Place: A006
Title: Malleability in Modern Cryptography
In recent years, malleable cryptographic primitives have advanced from being seen as a weakness allowing for attacks, to being considered a potentially useful feature. Malleable primitives are cryptographic objects that allow for meaningful computations, as most notably in the example of fully homomorphic encryption. Malleability is, however, a notion that is difficult to capture both in the hand-written and the formal security analysis of protocols. In my work, I look at malleability from both angles. On one hand, it is a source of worrying attacks that have, e.g., to be mitigated in a verified implementation of the transport layer security (TLS) standard used for securing the Internet. On the other hand, malleability is a feature that helps to build efficient protocols, such as delegatable anonymous credentials and fast and resource friendly proofs of computations for smart metering. We are building a zero-knowledge compiler for a high-level relational language (ZQL), that systematically optimizes and verifies the use of such cryptographic evidence. We recently discovered that malleability is also applicable to verifiable shuffles, an important building block for universally verifiable, multi-authority election schemes. We construct a publicly verifiable shuffle that for the first time uses one compact proof to prove the correctness of an entire multi-step shuffle. In our work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully ‘control’ the malleability of the proof system. In our shuffle application controlled-malleable proofs allow each mixing authority to take as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability. Short Bio: I am a postdoc at Microsoft Research Cambridge in the Programming Principles and Tools group. I did my PhD at the COSIC (Computer Security and Industrial Cryptography) group at the K.U.Leuven, and my master thesis at IBM Research Zurich. My research focus is on privacy-enhancing protocols and formal verification of cryptographic protocols.
Paul Gibson (Telecom & Management SudParis)
Date: Friday, April 6 – 14:00
Place: A008
Title: Composing objects and services: a formal model driven development (MDD) approach
In this talk, we will present the problem of composing systems and reasoning about the correctness of such compositions. We will present semantics linking objects and services, and discuss the use of formal methods during MDD of systems of services. Different formalisms will be used to highlight the important issues, but we will focus on our ongoing research using Event-B to model requirements, refined into designs that are then transformed into OO code (with concrete examples in Java).
David Delahaye (CNAM)
Date: Friday, March 23 – 14:00
Place: A008
Title: Tableaux modulo theories using superdeduction: an application to the verification of B proof rules with the Zenon automated theorem prover
We propose a method for developing tableaux modulo theories using the principles of superdeduction, in which the theory is used to enrich the deduction system with new deduction rules. This method is presented in the framework of the Zenon automated theorem prover, and we describe an example application to the set theory of the B method. This provides an alternative prover to that of Atelier B, which can be used in particular to verify B proof rules. We also report on benchmarks in which this prover is able to automatically verify part of the rules coming from the database of added rules maintained by Siemens IC-MOL.
Martin Gagné (VERIMAG, Université Joseph Fourier)
Date: Tuesday, March 20 – 14:00
Place: A008
Title: Towards Automated Security Proofs of Block-Cipher Based Cryptographic Primitives
A block cipher algorithm (e.g. AES, Blowfish, DES, Serpent and Twofish) is a symmetric key algorithm that takes a key, a fixed size input message block and produces a fixed size output block. Block ciphers are among the most ubiquitous algorithms in symmetric cryptography, and can be used to construct encryption schemes, message authentication codes, collision-resistant hash functions, and many other primitives. As with any other cryptographic construction, we require that these primitives be proven secure, usually through a reduction of the security of the primitive to the security of the block cipher. Since block-cipher based primitives are usually constructed using a fairly small set of operations, we believe that Hoare logic is an ideal tool for automatically proving their security. We present a Hoare logic for proving the security of block cipher modes of operation. Our logic is fairly simple, requires only three invariants, and can be used to prove the security of all the common modes of operation. We also present a second Hoare logic for proving the security of message authentication codes. This logic is much more complicated than the first, and we discuss the different challenges posed by this primitive.
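To fix ideas about what a mode of operation is, here is a minimal CBC encryption routine written generically over an arbitrary block-cipher function; the underlying toy cipher is invented and insecure, and in practice one would plug in AES or another real block cipher. The security arguments the talk automates concern such generic constructions, assuming the block cipher itself is good.

```python
import os

BLOCK = 16  # bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in for a real block cipher (NOT secure, for illustration only)."""
    return bytes((b + key[i % len(key)]) % 256 for i, b in enumerate(block))

def cbc_encrypt(enc, key: bytes, plaintext: bytes) -> bytes:
    """CBC mode over an arbitrary block cipher `enc(key, block)`.
    Assumes the plaintext length is a multiple of BLOCK (no padding shown)."""
    assert len(plaintext) % BLOCK == 0
    iv = os.urandom(BLOCK)                          # fresh random IV, sent in the clear
    prev, out = iv, [iv]
    for i in range(0, len(plaintext), BLOCK):
        block = xor(plaintext[i:i + BLOCK], prev)   # chain in the previous ciphertext block
        prev = enc(key, block)
        out.append(prev)
    return b"".join(out)

ct = cbc_encrypt(toy_block_cipher, b"0123456789abcdef", b"A" * 32)
print(ct.hex())
```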
Emmanuel Jeandel (LIF, Université de Provence)
Date: Monday, March 19 – 14:00
Place: A008
Title: Uninitialized models of computation and dynamical systems
In this talk, we consider the usual models of computation (Turing machines, counter machines) as dynamical systems, corresponding to the application of their transition function to an arbitrary configuration. In this setting it is necessary to observe their behaviour starting from any configuration, rather than from a given initial configuration, and the problem then becomes completely different. For example, the proof of the undecidability of the existence of a configuration on which a Turing machine does not halt (Hooper, 1966) is far more delicate than the undecidability of the halting of a Turing machine started from a given configuration (Turing, 1937), and this phenomenon recurs for most models of computation. After a general presentation of the problem, we will try to illustrate the methods that allow one to solve or circumvent it in the setting of the computation model of tilings by Wang tiles, which lends itself very well to a dynamical treatment.
Christophe Calvès (INRIA Lille-Nord Europe)
Date: Tuesday, March 13 – 14:00
Place: A008
Title: Specifying Kermeta through Rewriting Logic
Kermeta is a Domain-Specific Language designed as a kernel for metamodel engineering. It unifies metamodeling, constraints, semantics and transformations into a coherent and statically typed language. In a nutshell, it has an object-oriented base similar to Java, but with support for metamodeling aspects such as attributes, associations, multiplicity, … and a “design by contract” approach in the style of Eiffel. Kermeta enables users to give a semantics to their metamodels. Unfortunately, even though Kermeta is mature software, it has not been (formally) specified yet. This lack prevents any (formal) verification of a metamodel’s semantics. In this talk we will present a specification of Kermeta’s semantics through the formal definition of a compiler from Kermeta to an object-oriented stack-based abstract machine. Both the compiler and the abstract machine are defined in the specification framework K, based on rewriting logic. We will show the problems raised by Kermeta’s particularities (such as attribute/reference handling, design by contract, multiple inheritance, genericity, …), how to formalize them, and how to compile them correctly.
Marian Baroni (University ’Dunarea de Jos’ of Galati, Romania)
Date: Thursday, December 8 – 16:00
Place: A006
Title: A Bishop-style constructive theory of posets
Partially ordered sets are examined from the point of view of Bishop’s constructive mathematics, which can be viewed as the constructive core of mathematics and whose theorems can be translated into many formal systems of computable mathematics. The features of Bishop’s constructivism are illustrated by a simple example, the least-upper-bound principle. Unlike the classical case, this statement does not hold constructively. However, the order completeness of the real number set can be expressed constructively by an equivalent condition for the existence of supremum, a condition of order locatedness which is vacuously true in the classical setting. A generalization of this condition leads to a reasonable definition of order completeness for an arbitrary partially ordered set.
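For readers unfamiliar with the constructive setting, the following is one standard Bishop-style way of stating the supremum condition alluded to above (my paraphrase, not necessarily the speaker's exact formulation): a nonempty set S of reals that is bounded above has a supremum precisely when it is order located, i.e.

```latex
% Order locatedness of a nonempty, bounded-above S \subseteq \mathbb{R}:
\forall a, b \in \mathbb{R}.\;
  a < b \;\Longrightarrow\;
  \bigl( (\exists s \in S.\; a < s) \;\lor\; (\forall s \in S.\; s \le b) \bigr)
```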
Antoine Taveneaux (Université Paris Diderot, LIAFA)
Date: Tuesday, December 6 – 14:00
Place: A006
Title: Computability and effective mathematics
In many mathematical theorems, the existential quantifier is not “effective”, in the sense that the object whose existence is proved is not computable. Computability theory is mainly concerned with comparing sets of integers from the point of view of their mutual computability. We will see that one can also compare the effectiveness of theorems, and in some sense do “computability on theorems”. This talk will be given in French and assumes almost no background in computability theory.
Olivier Pereira (UCL Crypto Group, Louvain-la-Neuve)
Date: Monday, November 28 – 10:00
Place: A008
Title: Running Mixnet-Based Elections with Helios
The Helios voting system is an open-audit web-based voting system that has been used by various institutions in real-stake elections during the last few years. While aiming to keep the election workflow simple, the homomorphic tallying process used in Helios limits its suitability for many elections (large number of candidates, specific ballot filling rules…). We present a variant of Helios that allows an efficient mixnet-based tallying procedure, and document the various choices we made in terms of election workflow and algorithm selection. In particular, we propose a modified version of the TDH2 scheme of Shoup and Gennaro that we found particularly suitable for the encryption of the ballots. Our Helios variant has been tested during two multi-thousand voter elections. The lessons taken from the first of these elections motivated some changes to our procedure, which were successfully tried out during the second election. This is joint work with Philippe Bulens and Damien Giry.
Roberto Giacobazzi (Univ. Verona, Italy)
Date: Tuesday, November 15 – 10:30
Place: B011
Title: Obfuscation by Partial Evaluation of Distorted Interpreters
Joint work with Neil D. Jones and Isabella Mastroeni. How can one construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P’ from any program P given as clear source code. Start with a (program-executing) interpreter Interp for the language in which P is written. Then “distort” Interp so that it is still correct, but its specialization P’ with respect to P is transformed code that is equivalent to the original program, yet harder to understand or analyze. Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it into P’ on which (abstract) interpretation is incomplete. Interpreter distortion can be achieved by leaving residual, in the specialization process, sufficiently many interpreter operations to defeat an attacker in extracting sensible information from the transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
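As a small illustration of the last transformation mentioned (opaque predicate insertion), here is a Python sketch; the predicate relies only on the arithmetic fact that x*(x+1) is always even, and the function names and constants are purely illustrative, not taken from the authors' obfuscator:

```python
def pin_ok(pin: int) -> bool:
    # Original clear code: a single comparison.
    return pin == 4242

def pin_ok_obfuscated(pin: int, x: int = 7) -> bool:
    # Opaque predicate: (x*x + x) % 2 == 0 holds for every integer x,
    # because x*(x+1) is always even, but a static analyser that does
    # not know this must consider both branches.
    if (x * x + x) % 2 == 0:
        return (pin ^ 0xFFFF) == (4242 ^ 0xFFFF)  # equivalent test, disguised
    else:
        return pin == 0  # dead branch, inserted only to confuse analysis

# The transformation preserves the observable behaviour.
assert all(pin_ok(p) == pin_ok_obfuscated(p) for p in range(10000))
```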
Vincent Cheval (LSV, Cachan)
Place: B011
Title: A decision procedure for trace equivalence
We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability. In this paper, we give a calculus that is close to the applied pi calculus and that allows one to capture most existing protocols that rely on classical cryptographic primitives. First, we propose a symbolic semantics for our calculus relying on constraint systems to represent infinite sets of possible traces, and we reduce the decidability of trace equivalence to deciding a notion of symbolic equivalence between sets of constraint systems. Second, we develop an algorithm allowing us to decide whether two sets of constraint systems are in symbolic equivalence or not. Altogether, this yields the first decidability result of trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions).
Alexander Shen (CNRS, LIRMM, Montpellier)
Date: Tuesday, October 18 – 14:00
Place: C005 (salle du conseil)
Title: Kolmogorov complexity and “no information” notion
There are many ways to formalize the intuitive idea of “a message that has no information” or “a message that has full information” about something. One can use logic of knowledge, Shannon’s information theory, or consider protocols (deterministic, non-deterministic, probabilistic, etc.). We discuss these approaches and then switch to the algorithmic information theory approach, explaining A. Muchnik’s result about a (kind of) cryptographic framework in algorithmic information theory.
Daniel Leivant (Indiana University)
Date: Friday, June 10 – 14:00
Place: C103
Title: Dynamic logic and its parents
Modal deductive formalisms about imperative programs, such as dynamic logic, can be seen as syntactic sugar for more explicit forms of reasoning, such as first order theories and second order logic. This view suggests notions of completeness for dynamic logic, which we argue are more informative and appropriate than the traditional notion of relative completeness.
Frederic Prost (LIG, Grenoble)
Date: Wednesday, May 11 – 15:30
Place: B013
Title: Dynamic security policies
The non-interference property, which consists in proving that two parts of a program do not interact with each other, underlies many program analyses: parallelization, program slicing, dead-code elimination, etc. In the field of confidentiality, the goal is to show that there is no information flow from the “private” domain to the “public” one. This property is, however, too strict to account for real applications: for example, when checking a password there is formally an interaction between the private domain (the value of the password) and the public domain (access is granted or not depending on the code entered), yet such information leaks are considered acceptable. We introduce a notion of confidentiality policy in the form of associated rewrite rules (which can be seen as a finer form of typing) to specify that such leaks are “permitted” and to check that a program is correct with respect to a security policy. Likewise, since a bank card is blocked after three unsuccessful attempts, we show how, by adding actions to the rules defining the security policy, the policy can be modified during the computation, and how to check that the program remains valid with respect to its security policy.
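A minimal Python sketch of the password example (illustrative only; it does not use the speaker's rewrite-rule formalism): the one-bit comparison result is the policy-sanctioned leak from the high input, and a retry counter makes the policy change during execution.

```python
class PasswordChecker:
    """Toy illustration: the secret (high) value never flows to the caller;
    only the one-bit result of the comparison does, which the policy permits.
    After three failures the policy changes and access is refused outright."""

    def __init__(self, secret: str, max_tries: int = 3):
        self._secret = secret        # high (private) data
        self._tries_left = max_tries

    def check(self, attempt: str) -> bool:   # low (public) result
        if self._tries_left == 0:
            return False             # policy changed: card/account blocked
        ok = (attempt == self._secret)       # the permitted one-bit leak
        if not ok:
            self._tries_left -= 1
        return ok

c = PasswordChecker("1234")
print([c.check(a) for a in ["0000", "1111", "2222", "1234"]])  # last call blocked
```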
Vasileios Koutavas (Trinity College, Dublin)
Date: Tuesday, May 10 – 10:00
Place: A008
Title: An Introduction to Program Equivalence
Contextual equivalence is a relation between program terms which guarantees that related terms can replace each other inside any arbitrary system without changing the behaviour of the system. This relation reveals fundamental symmetries of program terms that abstract away from concrete syntax and illuminate the semantics of programming languages. Its implications span from programming language design to software engineering, to system security, to program verification. To understand contextual equivalence, and to develop reasoning principles for it, we need to develop a theory that can express the basic interactions between a program and its environment, and to express contextual equivalence within that theory. In this talk we will review the theories available today and then concentrate on those based on bisimulation, which we will examine for a variety of functional, imperative, and nondeterministic languages.
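As a toy illustration of the idea (not an example from the talk), here are two counter implementations in Python that differ internally yet cannot be distinguished by any client written against their public interface; an actual proof of contextual equivalence would of course quantify over all contexts, not just one.

```python
# Two counters with different internal representations but the same
# observable behaviour through the incr/read interface.
class CounterA:
    def __init__(self): self._n = 0
    def incr(self): self._n += 1
    def read(self): return self._n

class CounterB:
    def __init__(self): self._ticks = []          # different representation
    def incr(self): self._ticks.append(None)
    def read(self): return len(self._ticks)

def client(counter):        # one arbitrary "context" built from the interface
    for _ in range(5):
        counter.incr()
    return counter.read()

assert client(CounterA()) == client(CounterB())   # same observable result
```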
Marie de Roquemaurel (IRIT, Université de Toulouse III)
Date: Tuesday, April 5 – 14:00
Place: A006
Title: Constraint-based assistance for model design
Embedded computer systems are becoming ubiquitous in many domains, and more and more industrial actors seek to model their real-world problems, to check the consistency of their models, and to use efficient tools to solve them. In this work, we consider problems modelled with the tools currently used by the model-driven engineering community, such as the EMF framework (Eclipse Modeling Framework) to represent the embedded system, or the OCL constraint language (Object Constraint Language) to express business properties. To check these properties efficiently, the chosen approach consists in transforming the problem into a constraint satisfaction problem and using efficient Artificial Intelligence techniques to solve it. The first experimental results are promising. In this talk, I will present this approach, and we will see that it has the advantage of supporting constraint validation and the completion of partial models, while also being able to take a minimization criterion into account. Keywords: System design, Model-Driven Engineering, Model Transformation, Synthesis, Constraints
Gennaro Parlato (CNRS, LIAFA, Paris)
Date: Tuesday, February 15 – 14:00
Place: A008
Title: Decidable Logics Combining Heap Structures and Data
We define a new logic, STRAND, that allows reasoning about heap-manipulating programs using deductive verification and SMT solvers. STRAND (“STRucture ANd Data”) formulas express constraints involving heap structures and the data they contain; they are interpreted over a class of pointer-structures R given by MSO-definable relations over trees, and are of the form ∃x∀y.φ(x, y), where φ is a monadic second-order logic (MSO) formula with additional quantification that combines structural constraints as well as data-constraints, but where the data-constraints are only allowed to refer to x and y. The salient aspects of the logic are: (a) the logic is powerful, allowing existential and universal quantification over the nodes, and complex combinations of data and structural constraints; (b) checking Hoare-triples for linear blocks of statements with pre-conditions and post-conditions expressed as Boolean combinations of existential and universal STRAND formulas reduces to satisfiability of a STRAND formula; (c) there are powerful decidable fragments of STRAND, one semantically defined and one syntactically defined, where the decision procedure works by combining the theory of MSO over trees and the quantifier-free theory of the underlying data-logic. We demonstrate the effectiveness and practicality of the logic by checking verification conditions generated in proving properties of several heap-manipulating programs, using a tool that combines an MSO decision procedure over trees (MONA) with an SMT solver for integer constraints (Z3).
Marco Gaboardi (Università di Bologna)
Date: Tuesday, February 8 – 14:00
Place: A008
Title: Linear Dependent Types for Certified Resource Consumption
I present the ideas underlying the design of an interactive programming framework useful to ensure the reliability and correctness of programs with respect to their resource consumption. The key ingredient of this framework is a type system, dlPCF, mixing ideas from linear logic and dependent types. I introduce dlPCF and I show that it is sound and relatively complete. Completeness holds in a strong sense: dlPCF is not only able to precisely capture the functional behaviour of programs (i.e. how the output relates to the input) but also some of their intensional properties, namely the complexity of their evaluation. Additionally, dlPCF is parametrized on the underlying language of index terms, which can be tuned so as to sacrifice completeness for tractability.
Steve Kremer (INRIA, LSV, Cachan)
Date: Tuesday, January 25 – 14:00
Place: A008
Title: Formal analysis of security protocols: the case of electronic voting
Security protocols are distributed programs which aim at establishing security guarantees, such as confidentiality, authentication or anonymity, while being executed over an untrusted network. Nowadays, security protocols are present in many aspects of our everyday life: electronic commerce, mobile phones, e-banking, etc. Symbolic methods, which often come with automated tool support, have proven extremely useful for finding structural flaws in such protocols or proving their absence. In this talk we report on our efforts carried out over the last years to apply such techniques to electronic voting protocols. The specificities of electronic voting have raised several challenges: they make use of esoteric cryptographic primitives and need to guarantee complex security properties, such as receipt-freeness or verifiability. We have modelled and analyzed such protocols and their properties in the applied pi calculus, a formal language for expressing security protocols. Formally defining the protocols and properties allowed us to highlight many (hidden) assumptions and turned out to be a non-trivial task in itself. Regarding automated analysis, this line of work raised a number of challenges in formal analysis of security protocols, some of which are still the subject of active research.
Florian Horn (CNRS, LIAFA, Paris)
Date: Tuesday, January 18 – 14:00
Place: A008
Title: Solving Simple Stochastic Tail Games
Stochastic games are a natural model for open reactive processes: one player represents the controller and his opponent represents a hostile environment. The evolution of the system depends on the decisions of the players, supplemented by random transitions. There are two main algorithmic problems on such games: computing the values (quantitative analysis) and deciding whether a player can win with probability 1 (qualitative analysis). In this paper we reduce the quantitative analysis to the qualitative analysis: we provide an algorithm for computing values which uses qualitative analysis as a sub-procedure. The correctness proof of this algorithm reveals several nice properties of perfect-information stochastic tail games, in particular the existence of optimal strategies.
Laetitia Laversa (Université Sorbonne Paris Nord, LIPN)
Date: Monday, February 19th 2024 – 13h00
Place: B013 (Bob)
Title: Communicating automata and k-synchronizability
Distributed systems are ubiquitous and their implementation is complex and error-prone. In order to check for errors, they can be modeled as systems of communicating automata, where each automaton represents the behavior of an element of the system. In general, verification problems are undecidable in such a model. The use of (under or over) approximations is necessary. This talk presents k-synchronizable systems, which are a subclass of systems of communicating automata, and some results on them.
Geoff Sutcliffe (University of Miami)
Date: Friday, June 23rd 2023 – 14h00
Place: C103
Title: The TPTP World – Infrastructure for Automated Reasoning
The TPTP World is a well known and established infrastructure that supports research, development, and deployment of Automated Theorem Proving (ATP) systems for classical logics. The data, standards, and services provided by the TPTP World have made it increasingly easy to build, test, and apply ATP technology. This talk and tutorial reviews the core features of the TPTP World, describes key service components of the TPTP World and how to use them, and presents some successful applications.
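For readers who have not seen the TPTP language, here is a tiny first-order (FOF) problem in TPTP syntax, embedded in a Python string so it can be written to a file and handed to any TPTP-compliant prover (the file name is arbitrary, and the prover invocation is omitted since command lines vary between systems):

```python
# A classic syllogism in TPTP FOF syntax: fof(name, role, formula).
problem = """
fof(all_humans_mortal, axiom, ![X]: (human(X) => mortal(X))).
fof(socrates_human,    axiom, human(socrates)).
fof(socrates_mortal,   conjecture, mortal(socrates)).
"""

with open("socrates.p", "w") as f:   # any TPTP-compliant ATP system can read this file
    f.write(problem)
```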
Bernadette Charron-Bost (Département Informatique de l’ENS)
Date: Tuesday, May 23rd 2023 – 14h00
Place: C103
Title: Computable Functions in Anonymous Networks
In this talk, we present several computability results in anonymous networks with broadcast communications. First, we recall the characterization, given by Boldi and Vigna, of the computable functions when agents have no information on their outgoing neighborhoods. Then we characterize the class of computable functions in networks with either (a) output port awareness, (b) bidirectional links, or (c) outdegree awareness: In each case, we prove that a function can be computed if and only if it is frequency-based, namely, its value only depends on the frequencies of the different input values. This characterization holds for both exact and approximate computability. Our approach relies on the notion of graph fibration, and the key to our positive result is a distributed algorithm that computes the minimum base of the network.
In the second part of the talk, we tackle the setting of dynamic networks and present a quite different approach based on the popular Push-Sum algorithm: When an upper bound on the network size is available, we provide a more tractable algorithm for computing frequency-based functions, which can cope with dynamic network topology changes.
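A minimal simulation sketch of the Push-Sum idea on a fixed strongly connected graph (a simplification of the dynamic setting discussed in the talk; the graph, round count and input values are arbitrary choices): each node repeatedly halves its (sum, weight) pair and pushes one half to a neighbour, and every local ratio sum/weight converges to the network-wide average, a typical frequency-based function.

```python
import random

def push_sum(values, neighbours, rounds=200):
    # values: initial inputs x_i; neighbours: adjacency lists (self-loops included).
    s = list(values)              # running sums
    w = [1.0] * len(values)       # running weights
    for _ in range(rounds):
        new_s = [0.0] * len(values)
        new_w = [0.0] * len(values)
        for i in range(len(values)):
            half_s, half_w = s[i] / 2, w[i] / 2
            target = random.choice(neighbours[i])
            new_s[i] += half_s; new_w[i] += half_w           # keep one half
            new_s[target] += half_s; new_w[target] += half_w  # push the other half
        s, w = new_s, new_w
    return [si / wi for si, wi in zip(s, w)]   # each ratio approaches the average

vals = [4.0, 8.0, 0.0, 12.0]
adj = [[0, 1], [1, 2], [2, 3], [3, 0]]   # a directed ring with self-loops
print(push_sum(vals, adj))               # all entries close to 6.0
```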
Jawher Jerray (Télécom Paris)
Date: Wednesday, May 3rd 2023 – 14h00
Place: B013
Title: Guaranteed properties of dynamical systems under perturbations
Dynamical systems have a major impact on human development, especially critical systems that can put human lives at risk if something goes wrong; hence the need to study the behavior of these systems in order to guarantee their correct functioning. Nevertheless, analysing this type of system has never been an easy task, as their complexity is constantly increasing, in addition to the perturbations that may arise during operation and the undefined parameters that may exist. To ensure that a system always produces the expected results and does not fail in any way, a formal verification of its behavior and properties is necessary. In this work, we study the schedulability of the flight control of a space launcher with unknown parameters and under constraints. Then, we propose a synthesis of the admissible timing values of the unknown parameters using a parametric timed model checker. We increase the complexity of the problem by taking into consideration the switch time between two threads. We extend this work by developing a tool that translates a given real-time system design into parametric timed automata in order to infer timing constraints ensuring schedulability.
Johannes Mueller (Université du Luxembourg)
Date: Wednesday, April 5 2023 – 10h00
Place: B013
Title: Towards End-to-End Formal Verification of Voting Protocols
Arnaud Spiwack (Tweag)
Date: Thursday, May 12 2022 – 11h00
Place: B013
Title: Linear Haskell
Since 2016, I’ve been leading the effort to supplement the functional programming language Haskell with linear typing (in the sense of linear logic). That is, you can write a type of functions that must use their arguments exactly once. The first iteration of this was released as part of GHC 9.0.
This may seem like a curious property to require of a function. Originally, linear logic was motivated by proof-theoretic considerations: at first it appeared as a natural decomposition of the coherence-space models of classical logic, but it turned out to have far-reaching proof-theoretic consequences. I’m one to take the connection between proof theory and programming languages (the Curry-Howard correspondence) quite seriously. From a programming language standpoint, linear logic was almost immediately seen as giving a way to model resources in types, but what this concretely means is not super clear. In this talk I will describe the sort of practical benefits that we expect from linear types in Haskell today. They are, in particular, related to Rust’s ownership typing, though I don’t know whether I will have time to explain this in detail. At any rate, I am not going to spoil the entire talk here; I’d do a bad job of it in these handful of lines, anyway.
Souheib BAARIR (Sorbonne Université – LIP6)
Date: Friday, April 29 2022 – 13h00
Place:
Title: Contributions to the analysis of discrete systems through propositional satisfiability
Despite its NP-completeness, propositional satisfiability (SAT) covers a broad spectrum of applications. Nowadays, it is an active research area finding its applications in many contexts: planning, hardware and software verification, cryptology, computational biology, etc. Hence, the development of approaches that can handle increasingly challenging SAT problems has become a focus. In this presentation, I will discuss two directions that contribute to the improvement of SAT solving: namely, the exploitation of symmetries and parallelism.
Alexandre Debant (Inria Nancy – Grand Est)
Date: Tuesday, April 6 2021 – 14h00
Place:
Title: Formal verification of security protocols – how to prove the physical proximity of the participants?
Security protocols are short distributed programs designed to achieve various security goals such as privacy and authenticity, even if the communications between participants occur on compromised or insecure channels, like the Internet. To achieve these goals, protocols usually rely on cryptographic primitives, like encryption, signature or hash functions, and tools have been developed to automatically prove their security.
Unfortunately, with the rise of the RFID or NFC technologies, new security protocols have been designed to ensure new security requirements, and existing tools do not apply to prove their security.
For instance, regarding the contactless payment protocols or the keyless systems, a new security goal is to ensure the physical proximity of the card to the reader to avoid relay attacks. To do so, the protocols, called distance-bounding protocols, rely on physical properties, like the transmission delay or the network topology, which are usually ignored in existing models and tools.
In this talk I will present a framework which makes the automatic analysis of these protocols possible.
Luigi Liquori (Inria Sophia-Antipolis)
Date: Wednesday, February 5 2020 – 11h00
Place: B013
Title: Why reductions have to be synchronised in intersection (and union) typed lambda-calculi
We present the ∆-calculus, an explicitly typed λ-calculus with strong pairs, projections and explicit type coercions.
The calculus can be parametrized with different intersection type theories, as described in the Barendregt-Dekkers-Statman book on λ-calculi with types, producing a family of ∆-calculi with related intersection typed systems. We show why annotating the pure λ-calculus with intersection types is not easy: a classical example is the difficulty of decorating the bound variable of the explicitly typed polymorphic identity λx:?.x such that the type of the identity is (σ → σ) ∩ (τ → τ): previous attempts showed that the full power of the intersection type discipline can easily be lost. We show why intersection typed systems need a kind of synchronised reduction to fix the subject reduction theorem. The same problems also appear when decorating the λ-calculus with union types. Finally, we show how the ∆-calculus can be raised to a ∆-framework by adding dependent types as in the Edinburgh Logical Framework.
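As a rough illustration of the flavour of typed term involved (my notation, not necessarily the paper's), an explicitly typed intersection calculus can keep the two differently decorated identities together in a strong pair whose components erase to the same pure λ-term:

```latex
\langle \lambda x{:}\sigma.\, x \,,\, \lambda x{:}\tau.\, x \rangle
  \;:\; (\sigma \to \sigma) \cap (\tau \to \tau)
```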
Raphaëlle Crubillé
Date: Wednesday, January 22 2020 – 11h00
Place: C005
Title: On the Versatility of Open Logical Relations: Continuity, Automatic Differentiation, and a Containment Theorem
I will present a joint work with Barthe, Dal Lago and Gavazzo.
Logical relations are one of the most powerful techniques in the theory of programming languages, and have been used extensively for proving properties of a variety of higher-order calculi. However, there are properties that cannot be immediately proved by means of logical relations, for instance program continuity and differentiability in higher-order languages extended with real-valued functions. Informally, the problem stems from the fact that these properties are naturally expressed on terms of non-ground type (or, equivalently, on open terms of base type), and there is no apparent good definition for a base case (i.e. for closed terms of ground types). To overcome this issue, we introduce a generalization of the concept of a logical relation, which we dub open logical relation, and prove that it can be fruitfully applied in several contexts in which the property of interest is about expressions of first-order type. Our setting is a simply-typed λ-calculus enriched with real numbers and real-valued first-order functions from a given set, such as the one of continuous or differentiable functions. We first prove a containment theorem stating that for any such a collection of functions including projection functions and closed under function composition, any well-typed term of first-order type denotes a function belonging to that collection. Then, we show by way of open logical relations the correctness of the core of a recently published algorithm for forward automatic differentiation. Finally, we define a refinement-based type system for local continuity in an extension of our calculus with conditionals, and prove the soundness of the type system using open logical relations.
Sophie Tourret (MPI-Inf Saarbrücken)
Date: Tuesday, December 3 2019 – 10h30
Place: A006
Title: Stronger Higher-order Automation
Automated reasoning in first-order logic (FOL) is becoming a mature research domain. It has led to the development of powerful tools such as superposition-based theorem provers and SMT solvers (Satisfiability Modulo Theory solvers), that have found and continue to find many applications in industry and research.
One such application is the automation of proofs in interactive theorem proving (ITP), where proof assistants are used to write computer-checked proofs of theorems, generally expressed in a variant of higher-order logic (HOL). This automation is realised via “hammers”, that attempt to delegate the proof obligations to first-order automated provers. However, in the translation from HOL to FOL, hammers obfuscate the structure of terms and formulas through the application of a sequence of encodings, although it is this very structure that the provers exploit to find proofs efficiently.
This situation is less than ideal. While until a few years ago the ITP and ATP communities worked mostly in parallel, largely ignoring each other, there is nowadays a trend pushing to bring the two communities closer. The work that I will present in this talk is part of this trend. It includes ongoing research that is either improving higher-order automation with respect to ITP applications or using ITP as a vehicle for ATP research.
Fabian Reiter (LSV, Cachan)
Date: Friday, October 19 2018 – 14h00
Place: B013
Title: Descriptive distributed complexity
This talk connects two classical areas of theoretical computer science: descriptive complexity and distributed computing. The former is a branch of computational complexity theory that characterizes complexity classes in terms of equivalent logical formalisms. The latter studies algorithms that run in networks of interconnected processors.
Although an active field of research since the late 1970s, distributed computing is still lacking the analogue of a complexity theory. One reason for this may be the large number of distinct models of distributed computation, which make it rather difficult to develop a unified formal framework. In my talk, I will outline how the descriptive approach, i.e., connections to logic, could be helpful in this regard.
Sergueï Lenglet (Loria, UL)
Date: Tuesday, October 16 2018 – 10h00
Place: A008
Title: HOpi in Coq: Locally Nameless vs Nominal
I will present what I did during my sabbatical in Rennes: the formalization of a higher-order process calculus, i.e., a process algebra where messages contain executable processes. I will discuss in particular the problem of representing binders, and compare existing techniques (locally nameless and nominal) in that setting.
Cristóbal Rojas (Universidad Andres Bello, Santiago, Chile)
Date: Tuesday, September 18 2018 – 14h00
Place: C103
Title: Computability and complexity of Julia sets
Julia sets of quadratic maps of the form z²+c for complex parameters c are among the most well-studied objects in dynamical systems. In particular, countless computer programs have been designed to visualize their structure. In 2006, Braverman and Yampolsky showed in a surprising result that there exist efficiently computable parameters c for which the Julia set cannot be computed by any algorithm. In this talk we will introduce the basic dynamical properties of quadratic maps of the complex plane and present several results on computability and computational complexity of their Julia sets. No previous knowledge in dynamical systems will be assumed.
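The visualization programs mentioned above typically rely on the escape-time criterion: once |z| exceeds 2, the orbit of z → z² + c is guaranteed to diverge. A minimal Python sketch follows (the parameter c, the resolution and the iteration bound are arbitrary choices, not taken from the talk):

```python
def escapes(z: complex, c: complex, max_iter: int = 100) -> bool:
    # Iterate z -> z**2 + c; once |z| > 2 the orbit diverges to infinity.
    for _ in range(max_iter):
        if abs(z) > 2:
            return True
        z = z * z + c
    return False   # undecided within max_iter: drawn as part of the filled Julia set

c = -0.8 + 0.156j   # an arbitrary example parameter
for im in range(10, -11, -2):
    row = ""
    for re in range(-30, 31):
        row += " " if escapes(complex(re / 20, im / 10), c) else "#"
    print(row)      # coarse ASCII picture of the filled Julia set
```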
Cezary Kaliszyk (University of Innsbruck)
Date: Tuesday, June 12 2018 – 15h00
Place: C103
Title: Typed set theory as an Isabelle object logic
One of the main goals of the Mizar project has been to create a formal system that would be attractive for mathematicians. As such, it differs in many aspects from all other systems, including its foundations, its type system and its input language. In this talk we present a combination of Isabelle – a modern logical framework – with a Mizar object logic, and argue that it can serve as an attractive environment for formal mathematics. We will first specify the Mizar foundations, which are a variant of set theory extended with a type system corresponding to how mathematicians classify objects. We will discuss mathematical structures and their inheritance, including multiple inheritance. Finally we will compare the proof styles of Mizar and Isabelle to natural proofs.
Fahad R. Golra (LORIA & UL)
Date: Tuesday, April 10 2018 – 14h00
Place: B013
Title: Bridging the gap between formal and informal models in software and system development projects
Formal methods serve as a backbone for software and system development projects that require a high level of accuracy. However, the client perspective of the future system is presented in an informal requirements document. This gap between formal and informal approaches becomes the weakest link of the chain for such development projects. The goal is to bridge this gap through a fine-grained level of traceability between the client-side informal requirements document and the developer-side formal specifications, using a semi-formal modeling technique, model federation. This fine-grained traceability can be exploited by the requirements engineering process for performing different actions that involve either or both of these informal and formal artifacts. In this talk, I will present the model federation approach and its application to linking textual requirements with Event-B machines. I will also present some future perspectives concerning the links between informal and formal approaches to software development.
Keywords: System specification, Requirements modeling, Model federation, Traceability, Requirements refinement
Thomas Chatain (LSV & ENS Cachan)
Date: Monday, March 19 2018 – 14h00
Place: A008
Title: Variations on alignments in process mining
Conformance checking techniques assess the suitability of a process model for representing an underlying process, observed through a collection of real executions. One important metric in conformance checking is to assess the precision of the model with respect to the observed executions, i.e., to characterize the ability of the model to produce behavior unrelated to the one observed. By avoiding the computation of the full state space of a model, current techniques only provide estimations of the precision metric, which in some situations tend to be very optimistic, thus hiding real problems a process model may have. In this paper we present the notion of anti-alignment as a concept to help unveil traces in the model that may deviate significantly from the observed behavior. Using anti-alignments, current estimations can be improved, e.g., in precision checking. We show how to express the problem of finding anti-alignments as the satisfiability of a Boolean formula, and provide a tool which can deal with large models efficiently.
Arthur Charguéraud (Inria & ICube, Université de Strasbourg)
Date: Tuesday, February 20 2018 – 15h00
Place: A008
Title: Verification of Imperative Programs using CFML
CFML is an interactive tool that supports modular verification of higher-order, imperative programs. It is entirely implemented inside the Coq proof assistant. CFML leverages two key ingredients. First, it relies on an embedding in Separation Logic to describe the mutable state. Second, it relies on the construction of Characteristic Formulae for smoothly integrating the process of verifying in Coq the code line by line. CFML has been applied to verify classical data structures and algorithms, including binary search trees, hashtables, finger trees, catenable and splittable chunk sequences, Dijkstra’s shortest path algorithm, depth first search, vectors, Eratosthenes’ sieve, and Union-Find. Furthermore, CFML has been recently extended to support amortized complexity analysis.
In this talk, I will give a tour of CFML, covering both its implementation and its use in practice. Experience with Coq is not required to follow the talk.
Martin Schüle (Faculté des sciences appliquées de Zürich)
Date: Thursday, December 7 2017 – 14h00
Place: B013
Title: Elementary Cellular Automata and the Nature of Computation
Computational notions abound in science. Especially in the fields of biology and neuroscience, computational capacities are often ascribed to entities and processes. Yet it is not clear what kind of “computation” is at work there. Can it still be treated within traditional computational theory, or is some other, more general or vaguer, notion of computation needed? An answer is formulated in the context of cellular automata theory: it is shown that elementary cellular automata must fulfill certain dynamical-system properties in order to exhibit computational capacities. These findings are then generalized to a generic notion of natural computation.
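For readers unfamiliar with the model, here is a minimal Python sketch of an elementary cellular automaton, using rule 110 (one of the rules known to be computationally universal); the finite width and wrap-around boundary are simplifications for display purposes only.

```python
def step(cells, rule=110):
    # Each new cell is determined by the 3-cell neighbourhood above it;
    # bit k of the rule number encodes the output for neighbourhood value k.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40      # start from a single live cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```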
Ross Duncan (Univ. of Strathclyde, Glasgow)
Date: Tuesday, June 13 2017 – 14h00
Place: A008
Title: Verifying Quantum Computations, Graphically.
Quantum computers are coming soon, and with them software whose functioning depends on the principles of quantum mechanics. Unfortunately, the “logic” which underlies quantum mechanics is very different to the logics usually employed by computer scientists for the verification of software. How then can we verify the correctness of quantum software? In this talk I will present the ZX-calculus, a formal graphical language based on the algebraic structure of quantum theory, and demonstrate how it can be used to verify quantum computations with the help of an automated rewrite system.
Vijay Ganesh (Univ. of Waterloo, Canada)
Date: Thursday, April 20 2017 – 14h00
Place: A006
Title: On The Unreasonable Effectiveness of Boolean SAT Solvers.
Modern conflict-driven clause-learning (CDCL) Boolean SAT solvers routinely solve very large industrial SAT instances in relatively short periods of time. This phenomenon has stumped both theoreticians and practitioners since Boolean satisfiability is an NP-complete problem widely believed to be intractable. It is clear that these solvers somehow exploit the structure of real-world instances. However, to-date there have been few results that precisely characterize this structure or shed any light on why these SAT solvers are so efficient. In this talk, I will present results that provide a deeper empirical understanding of why CDCL SAT solvers are so efficient, which may eventually lead to a complexity-theoretic result. Our results can be divided into two parts. First, I will talk about structural parameters that can characterize industrial instances and shed light on why they are easier to solve even though they may contain millions of variables compared to small crafted instances with hundreds of variables. Second, I will talk about internals of CDCL SAT solvers, and describe why they are particularly suited to solve industrial instances. Brief Bio: Dr. Vijay Ganesh is an assistant professor at the University of Waterloo since 2012. Prior to that he was a research scientist at MIT, and completed his PhD in computer science from Stanford University in 2007. Vijay’s primary area of research is the theory and practice of automated reasoning aimed at software engineering, formal methods, security, and mathematics. In this context he has led the development of many SAT/SMT solvers, most notably, STP, The Z3 string solver, MapleSAT, and MathCheck. He has also proved several decidability and complexity results relating to the SATisfiability problem for various mathematical theories. For his research, he has won over 21 awards including an ACM Test of Time Award at CCS 2016, two Google Faculty Research Awards in 2011 and 2013, and a Ten-Year Most Influential Paper Award at DATE 2008.
Amaury Pouly (MPI-SWS, Saarbrücken)
Date: Thursday, March 2 2017 – 14h00
Place: A006
Title: Solvability of Matrix-Exponential Equations.
We consider the problem of checking whether a switching system can reach a particular state. We show that this problem is undecidable in general, but decidable in some restricted cases. A switching system is a particular form of hybrid system very commonly used to model electrical systems with several states. It consists of a finite number of states and, in each state, the system evolves according to a linear differential equation. This type of model is also used to over-approximate the possible behaviours of a physical system when the transition rules between states are unclear. Our result shows a surprising connection with a number of theorems from algebraic and transcendental number theory, and with Hilbert’s Tenth Problem. One possible interpretation of this work is that switching systems, despite their innocent look, are already too powerful a model of computation.
Mahsa Shirmohammadi (Oxford Univ.)
Date: Tuesday, January 17 2017 – 14h00
Place: A008
Title: Minimal probabilistic automata have to make irrational choices.
In this talk, we answer the question of whether, given a probabilistic automaton (PA) with rational transition probabilities, there always exists a minimal equivalent PA that also has rational transition probabilities. The PA and its minimal equivalent PA accept all finite words with equal probabilities. We approach this problem by exhibiting a connection with a longstanding open question concerning nonnegative matrix factorization (NMF). NMF is the problem of decomposing a given nonnegative n×m matrix M into a product of a nonnegative n×d matrix W and a nonnegative d×m matrix H. In 1993 Cohen and Rothblum posed the problem of whether a rational matrix M always has an NMF of minimal inner dimension d whose factors W and H are also rational. We answer this question negatively, by exhibiting a matrix for which W and H require irrational entries. As a corollary we obtain a negative answer to the question on rationality of minimal probabilistic automata. This talk is based on two papers in ICALP 2016 and SODA 2017.
Sophie Tourret (NII Tokyo)
Date: Thursday, January 12 2017 – 14h00
Place: B011
Title: Towards SAT Solving in Batches.
In this presentation, I will talk about my ongoing work on how to perform SAT solving in batches, i.e., by making multiple decisions at once in the CDCL loop. There are theoretical obstacles to this because the admissible strategy of CDCL assumes that conflict resolution and unit propagation have a higher priority than decisions, and in the current CDCL framework, if this strategy is not followed, a deadlock can occur where no rule can be applied and the satisfiability of the input formula is still unknown. A new formalism will be presented to solve this problem, inspired by results from [1,2]. The considered method to verify this claim (Isabelle) and the practical context motivating this approach (SAT solving on GPU) will also be introduced. [1] Weidenbach, C. Automated reasoning building blocks. Correct System Design, Springer, 2015, 172-188. [2] Blanchette, J. C., Fleury, M. & Weidenbach, C. A Verified SAT Solver Framework with Learn, Forget, Restart, and Incrementality. IJCAR, 2016.
Martin Brain (Oxford Univ.)
Date: Tuesday, October 4 – 14h00
Place: A006
Title: Bit-Exact Automated Reasoning About Floating-Point Arithmetic.
The majority of new instrumentation and control systems, even those that are safety critical, make use of floating-point numbers. Thus the value of computer-generated assurance evidence (including verification, test coverage, etc.) depends on the correctness, completeness and efficiency of automated reasoning about floating-point expressions. In this talk we will review the SMT-LIB Theory of Floating-Point, its semantics and the rationale behind key design decisions, as well as surveying the current state of the art in solver technology and future research directions. We aim to provide system integrators with sufficient information to integrate floating-point support into SMT interfaces, and solver developers with enough ideas to work on the next generation of floating-point reasoning.
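A two-line Python illustration of why bit-exact reasoning matters: IEEE 754 addition is not associative, so algebraic identities that hold over the reals can be unsound over floats.

```python
a, b, c = 0.1, 0.2, 0.3

# Floating-point addition is not associative: reasoning valid over the
# reals can give the wrong answer over IEEE 754 doubles.
print((a + b) + c == a + (b + c))   # False
print((a + b) + c, a + (b + c))     # 0.6000000000000001 vs 0.6
```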
Yannick Chevalier (Univ. Toulouse III)
Date: Wednesday, March 23 – 14h00
Place: C103
Title: Towards the synthesis of secure components.
At Usenix Enigma 2016, the head of the NSA’s “Tailored Access Operations” team listed the preferred weaknesses used to get into a system. Social engineering aside, the preferred entry point into computer systems lies in the gap between the components (protocols, software, …) that system administrators intended to deploy and the software that actually exists. This difference comes from obsolete components being kept around, from patches not being applied, but also from components that do not faithfully implement their specification. This talk focuses on this last point, and on work opening the way towards the secure synthesis of communicating components, including cryptographic protocols and Web services and applications. The talk will first address the definition of an operational semantics from a declaration of message exchanges (joint work with Michaël Rusinowitch), and then service synthesis techniques when messages are abstracted away (joint work with MR and Walid Belkhir). Future work will consist in combining these two approaches to automatically generate implementations of protocols and services from already-proven secure components.
Sylvain Salvati (Inria Bordeaux, LaBRI)
Date: Tuesday, March 22 – 14h00
Place: C103
Title: Behavioral verification of higher-order programs.
Higher-order constructions are making their way into mainstream programming languages like Java, C++, Python, Rust… These constructions bring new challenges to the verification of programs, as they make their control flow more complex. In this talk, I will present how methods coming from denotational semantics can prove decidable the verification of certain properties of higher-order programs. These properties are expressed by means of finite-state automata over the possibly infinite execution trees generated by the programs, and can capture safety properties but also liveness and fairness properties.
Nicolas Ollinger (Univ. Orléans, LIFO)
Date: Monday, March 21 – 14h00
Place: C103
Title: The transitivity problem of Turing machines.
A Turing machine is topologically transitive if every partial configuration — that is, a state, a head position, plus a finite portion of the tape — can reach any other partial configuration, provided that they are completed into proper configurations. In this talk, we study the computational complexity of transitivity by combining a reduction technique called embedding, which preserves dynamical properties, with a remarkably small minimal aperiodic Turing machine called SMART. We further study the computational complexity of minimality, the property that every configuration reaches every partial configuration. This is joint work with Julien Cassaigne from I2M in Marseille, and Anahí Gajardo and Rodrigo Torres-Avilés from the University of Concepción, Chile.
Xavier Urbain (ENSIIE)
Place: C103
Title: Formal proof for robot swarms: gathering in the real plane.
Networks of mobile robots have been receiving growing attention from the distributed algorithms community in recent years. Analysing the feasibility of certain cooperative tasks for swarms of autonomous robots is, however, extremely difficult; the Pactole formal framework was designed to support mechanized formal verification (with Coq) of this emerging model. It makes it possible both to certify impossibility results and to prove the correctness of distributed protocols. As an illustration, I will present in this talk an original, universal algorithm for the gathering of oblivious robots in the real plane. This algorithm is proved correct thanks to our formal development.
Gwen Salaün (Ensimag)
Date: Tuesday, March 15 – 14h00
Place: A006
Title: Automated Verification of Asynchronously Communicating Systems.
Recent software is mostly constructed by reusing and composing existing components abstracted as finite state machines. Asynchronous communication is a classic interaction mechanism used for such software systems. However, analyzing communicating systems interacting asynchronously via FIFO buffers is an undecidable problem. A typical approach is to make finite the corresponding state space by limiting the presence of communication cycles in behavioral models or by fixing buffer sizes. In this talk, our focus is on systems that are likely to be unbounded and therefore result in infinite systems. We do not want to restrict the system by imposing any arbitrary bounds. We first present the synchronizability property and then introduce a notion of stability. We prove that once the system is stable for a specific buffer bound, it remains stable whatever larger bounds are chosen for buffers. This enables us to check certain properties on the system for that bound and to ensure that the system will preserve them whatever larger bounds are used for buffers. We also prove that computing this bound is undecidable but we show how we succeed in computing these bounds for many typical examples using heuristics and equivalence checking. We will also overview several properties of interest that communicating systems must satisfy when obtained via projection from choreography specifications.
Vlad Rusu (Inria Lille)
Date: Monday, March 7 – 11h00
Place: C103
Title: Incremental Language-Independent Program Verification.
Reachability Logic (RL) is a recently introduced formalism for defining the operational semantics of programming languages and for specifying program properties. As a program logic, RL can be seen as a language-independent generalisation of Hoare logics. Thus, using RL one can specify programs independently of the languages in which the programs are written, and one can design program-verification techniques that are also language-independent. Several such techniques have already been proposed, all of which have a circular nature: they take as input a set of RL formulas (specifying a given program that one wants to verify, as well as some of its subprograms), and allow formulas in the set to be circularly used in proofs of other formulas, or even in their own proof. Such circular reasoning is essential for dealing with repetitive behaviour (e.g., loops or recursive calls). The reasoning is sound, but only in a monolithic way, in the sense that if a proof succeeds on a given set of formulas then all the formulas in the set are semantically valid; but if the proof fails, then nothing can be inferred about any formula’s validity or invalidity. In practice, this means that one is left with no information after unsuccessful proof attempts. In this paper we deal with this issue by proposing an incremental method for proving RL formulas. The proposed method takes as input a given formula specifying a given program fragment, together with a (possibly empty) set of helper formulas that specify some of its subprograms and that were already proved valid (e.g., using our method or any other sound method). Then, certain conditions equivalent to RL formula validity are automatically checked. In case of success, the newly proved formula can be incrementally used as a helper in the proofs of other formulas, thereby proving increasingly larger program fragments. But in case of failure, unlike with monolithic methods, one is not left without information: one still knows the validity of the already proved helper formulas, and can build on this knowledge to make progress in the program’s proof. We illustrate the approach by successfully verifying the nontrivial Knuth-Morris-Pratt string-matching algorithm, whose verification had previously been attempted without success using a monolithic method.
Thomas Chatain (ENS Cachan & LSV)
Date: Wednesday, March 9 – 14h00
Place: A006
Title: Characterization of Reachable Attractors Using Petri Net Unfoldings.
Attractors of network dynamics represent the long-term behaviours of the modelled system. Their characterization is therefore crucial for understanding the response and differentiation capabilities of a dynamical system. In the scope of qualitative models of interaction networks, the computation of attractors reachable from a given state of the network faces combinatorial issues due to the state space explosion. In this paper, we present a new algorithm that exploits the concurrency between transitions of parallel acting components in order to reduce the search space. The algorithm relies on Petri net unfoldings that can be used to compute a compact representation of the dynamics. We illustrate the applicability of the algorithm with Petri net models of cell signalling and regulation networks, Boolean and multi-valued. The proposed approach aims at being complementary to existing methods for deriving the attractors of Boolean models, while being generic since it applies to any safe Petri net.
Ludovic Patey (Univ. Paris Diderot)
Date: Monday, December 14 – 14h00
Place: B013
Title: An introduction to reverse mathematics.
Following the curse of Gödel's incompleteness theorems, we are condemned for eternity to look for new axioms with which to complete our mathematical theories. In this foundational endeavour, reverse mathematics asks which minimal axioms are needed to prove the theorems of everyday mathematics. In this talk we introduce the tools of reverse mathematics and illustrate the two main observations of the field: most classical theorems require only weak axioms, and many of these theorems are equivalent to one of five sets of axioms. [1] S. Simpson, Subsystems of Second Order Arithmetic, 2nd ed., CUP, 2009. [2] D. Hirschfeldt, Slicing the Truth, to appear, 2013.
Shane Mansfield (Univ. Oxford)
Date: Tuesday, July 7 – 14h00
Place: C005
Title: Contextuality, Cohomology & Paradox.
Contextuality is a key feature of quantum mechanics that provides an important non-classical resource for quantum information and computation. Sheaf theory can be used to provide a general treatment of contextuality in quantum theory. However, contextual phenomena are found in other fields as well: for example, database theory. I will discuss my recent work in collaboration with Abramsky, Barbosa, Kishida & Lal (http://arxiv.org/abs/1502.03097) in which we develop this unified view of contextuality. This provides two main contributions: firstly, it exposes a remarkable connection between contextuality and logical paradoxes; secondly, an important class of contextuality arguments can be demonstrated to have a topological origin. More specifically, it is shown that “All-vs-Nothing” proofs of contextuality are witnessed by cohomological obstructions.
Piotr Polesiuk (Université de Wroclaw)
Date: Tuesday, June 16 – 14h00
Place: A006
Title: Logical relations for coherence of effect subtyping.
In a programming language with subtyping, a coercion semantics makes the conversions from one type to another explicit. Such a semantics is typically defined on typing derivations rather than on typing judgments. To avoid semantic ambiguity, it is expected to be coherent, i.e., independent of the typing derivation for a given typing judgment. In this talk I will present step-indexed logical relations for establishing the coherence of coercion semantics of programming languages with subtyping. To illustrate the effectiveness of the proof method, I will give an overview of a coherence proof of a type-directed CPS translation of a call-by-value lambda calculus with delimited continuations. Additionally, I will pinpoint the most interesting aspects of a Coq formalization of this work, in particular those that deal with step-indexing.
Sabine Frittella (LIF, Marseille)
Date: Tuesday, June 9 – 14h00
Place: C103
Title: Display calculi for dynamic modal logics.
First, we introduce display calculi and the key notions needed to understand and use them; we assume familiarity with the following notions: normal modal logic, Kripke frames, sequent calculi and lattices. Then we present the cut-elimination meta-theorem for display calculi. Finally, we present the design of a display calculus for the logic EAK (Epistemic Actions and Knowledge).
Julien Henry (Verimag)
Date: Tuesday, May 5 – 14h00
Place: B013
Title: Static program analysis by abstract interpretation and SMT decision procedures.
Static program analysis aims at automatically proving that a program satisfies certain properties. Abstract interpretation is a theoretical framework for computing program invariants, i.e., properties of the program variables that hold for every execution. The precision of the computed invariants depends on many parameters, in particular the abstract domain and the iteration order used during the invariant computation. Our research proposes several extensions of this method that improve the precision of the analysis. Abstract interpretation can also be made more precise by distinguishing all execution paths of the program, at the cost of an exponential blow-up in complexity. Satisfiability modulo theories (SMT), whose solving techniques have improved dramatically over the last decade, makes it possible to represent these sets of paths implicitly. We propose to use this implicit SMT-based representation and to apply it to state-of-the-art iteration orders in order to obtain more precise analyses. We then propose to couple SMT and abstract interpretation in new algorithms called Modular Path Focusing and Property-Guided Path Focusing, which compute loop and function summaries in a modular way, guided by error traces. Our technique has several uses: it can show that an error state is unreachable, but also infer preconditions for loops and functions. We apply our static analysis methods to the estimation of the worst-case execution time (WCET). We first show how to express this problem via optimization modulo theories, and why a naive SMT encoding of the problem yields formulas that are too hard for all current solvers. We then propose a simple and efficient way to reduce the solving time of SMT solvers considerably, by adding to the formulas certain implied properties obtained by static analysis. We show that this particular WCET case suggests that SMT solvers in general could be improved using methods derived from static program analysis. Finally, we present the implementation of Pagai, a static analyzer for LLVM that computes numerical invariants using the proposed methods. We have compared the different techniques on open-source programs and on benchmarks used by the community.
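To make the abstract-interpretation side concrete, here is a minimal, self-contained sketch (the editor's toy example, unrelated to Pagai's actual code) of an interval analysis of the hypothetical loop `x = 0; while (x < 100) x = x + 1`, using widening to force convergence of the fixpoint iteration:

```python
# Toy interval abstract interpretation of:  x = 0; while (x < 100) x = x + 1
# Hypothetical illustration only; None encodes the empty interval (bottom).

def join(a, b):
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Classic interval widening: any unstable bound jumps to infinity."""
    if old is None: return new
    if new is None: return old
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def guard_lt(iv, c):        # abstract test  x < c
    if iv is None or iv[0] >= c: return None
    return (iv[0], min(iv[1], c - 1))

def guard_ge(iv, c):        # abstract test  x >= c
    if iv is None or iv[1] < c: return None
    return (max(iv[0], c), iv[1])

def add_const(iv, c):
    return None if iv is None else (iv[0] + c, iv[1] + c)

# Invariant at the loop head: least fixpoint of  I = [0,0] join ((I meet [x<100]) + 1),
# accelerated by widening so that the iteration terminates.
inv, entry = None, (0, 0)
while True:
    new = widen(inv, join(entry, add_const(guard_lt(inv, 100), 1)))
    if new == inv:
        break
    inv = new

print("loop-head invariant:", inv)                 # (0, inf) with plain widening
print("invariant at exit  :", guard_ge(inv, 100))  # x >= 100; a narrowing pass would refine it
```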
Timo Jolivet (Institut de Mathématiques de Toulouse)
Date: Friday, April 24 – 14h00
Place: A006
Title: Fractals, undecidability and multi-tape automata.
We prove that the property "having empty interior" is undecidable for a simple and natural family of fractals (defined by 2D affine iterated function systems). The methods involve multi-tape automata and a variant of Post's correspondence problem. This is joint work with Jarkko Kari.
Arnaud Spiwack (CRI, Mines ParisTech)
Date: Tuesday, April 21 – 11h00
Place: A006
Title: Of Coq and interactive proofs.
The Coq proof assistant is a language, based on Martin-Löf’s dependent type theory, which is a synthesis of a (functional) programming language and a mathematical language. As such, Coq, and dependent type theory in general, has a unique outlook on formal methods and program correctness. Proofs and programs can be interleaved in many ways, making it possible to keep proofs of program properties local. At the extreme end of this spectrum, it is possible to write programs in the form of constructive mathematical proofs. In this talk, I will give an overview of the Coq proof assistant and then I will open the hood and show how the interactive proof facilities of Coq are implemented. Historically, they used an idea due to Robin Milner for the proof assistant LCF (which gave us the ML programming language), but dependent type theories such as Coq allow for a more robust design. The API and implementation of the interactive proof facilities of Coq rely on the notion of monad which has been popularised by the Haskell programming language. This yields a powerful and flexible API whose deduction rules closely match those of Coq, whereas the implementation based on Milner’s design was limited to an approximation of these rules.
Cyril Cassagnes (Université du Luxembourg)
Place: B013
Title: Distributed system design: there are still pitfalls on the road ahead.
Heterogeneous and distributed computer systems are becoming the norm. In this talk, we will therefore discuss the following two aspects. The first aspect is linked to systems heterogeneity, both software and hardware. For software, this heterogeneity also appears through the functional and non-functional properties of the components or agents forming the system. The second aspect of these systems is related to data. The value of digital data has put information systems in a strategic position: they must process more and more data. This has created large distributed systems. Indeed, these systems aim to be distributed in order to achieve better performance thanks to the distribution of computation and storage. In both cases, it is important to be able to verify the behaviour of the system, or of some of the algorithms that constitute it, especially when these systems are deployed in our daily environment. Indeed, these systems are complex to implement or model and keep growing in complexity. That is why tools are required. For instance, for the first aspect it is relevant to be able to bring out emergent behaviour resulting from sub-system interactions at the implementation time of a system of systems, in order to guarantee a level of safety and/or predictability of the system.
Martin Delacourt (CMM, Santiago du Chili)
Date: Friday, April 17 – 14h00
Place: A006
Title: Limit sets of cellular automata associated with a probability measure.
In the world of dynamical systems, one is often interested in asymptotic behaviours and in particular in the limit set, that is, the set of points of the system that can be visited arbitrarily late during an evolution. To refine this picture of asymptotic behaviours, one adds a probability measure on the set of initial configurations and forgets the parts of the space that tend to disappear. We consider here the particular case of cellular automata, a classical model of discrete dynamical systems. They consist of infinitely many cells forming a regular graph, each in a state taken from a finite set, and they evolve by applying the same local rule at every point simultaneously. We give a characterization of the limit sets associated with a probability measure, of the large classical families of subshifts that can be realized, and a Rice theorem on these sets.
Pierre Lescanne (ENS Lyon)
Date: Tuesday, April 14 – 14h00
Place: B013
Title: Coinduction, Equilibrium and Rationality of Escalation.
Escalation is the behavior of players who play forever in the same game. Such a situation is a typical field of application for coinduction, the tool conceived for reasoning about infinite mathematical structures. In particular, we use coinduction to formally study the game called the dollar auction, which is considered the paradigm of escalation. Contrary to what has been accepted since 1971, we show that, provided one assumes that the other agent will always stop, bidding is rational, because it results in a subgame perfect equilibrium. We show that this is not the only rational strategy profile (the only subgame perfect equilibrium). Indeed, if an agent stops and will stop at every step, whereas the other agent keeps bidding, we claim that he is rational as well, because this corresponds to another subgame perfect equilibrium. In the infinite dollar auction game, the behavior in which both agents stop at each step is not a Nash equilibrium, hence is not a subgame perfect equilibrium, hence is not rational. Fortunately, the notion of rationality based on coinduction fits with common sense and experience. Finally, the possibility of a rational escalation in an arbitrary game can be expressed as a predicate on games, and the rationality of escalation in the dollar auction game can be proved as a theorem which we have verified in the proof assistant Coq. In this talk we will recall the principles of infinite extensive games and use them to introduce coinduction and equilibria (Nash equilibrium, subgame perfect equilibrium). We will show how one can prove that the two strategy profiles presented above are equilibria and how this leads to a "rational" escalation in the dollar auction. We will show that escalation may even happen in a much simpler game called the 0,1-game.
Bernadette Charron-Bost (CNRS, LIX)
Date: Thursday, April 9 – 15h00
Place: C103
Title: Approximate Consensus in Highly Dynamic Networks.
(joint work with Matthias Függer and Thomas Nowak) We investigate the approximate consensus problem in highly dynamic networks in which topology may change continually and unpredictably. We prove that in both synchronous and partially synchronous networks, approximate consensus is solvable if and only if permanently, the communication graph has a rooted spanning tree. The striking point is that the spanning tree may continually change over time without preventing nodes from converging to consensus. Actually the class of averaging algorithms, which have the benefit of being memoryless and requiring no process identifiers, entirely captures the solvability issue of approximate consensus in that the problem is solvable if and only if it can be solved using any averaging algorithm. We develop a proof strategy which consists in a reduction to the “non-split” networks. It dramatically improves the best known upper bound on the decision times of averaging algorithms and yields a non-averaging algorithm for approximate consensus in non-anonymous networks with a decision time that is quadratic in the number of nodes. We also prove that a general upper bound on the decision times of averaging algorithms has to be exponential, shedding light on the high price of anonymity. Finally we apply our results to networked systems with a fixed topology and benign fault models to show that with n processes, up to 2n-3 link faults per round can be tolerated for approximate consensus, increasing by a factor 2 the tight bound for exact consensus established by Santoro and Widmayer.
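As an illustration of the averaging algorithms mentioned above (an editor's toy sketch, not the paper's construction), each node repeatedly replaces its value by the average of the values it hears from its in-neighbours; on a dynamic graph that always keeps a spanning tree rooted at some node, the values converge towards each other:

```python
import random

# Toy averaging algorithm on a highly dynamic network: at every round the
# communication graph changes, but we always keep a spanning tree rooted at
# node 0, which matches the solvability condition described in the talk.

def random_rooted_graph(n):
    """Self-loops everywhere plus a random spanning tree rooted at node 0."""
    edges = {i: {i} for i in range(n)}           # each node hears itself
    for i in range(1, n):
        edges[i].add(random.randrange(0, i))     # tree edge: some parent -> i
    return edges                                 # edges[i] = nodes i hears from

def averaging_round(values, edges):
    return [sum(values[j] for j in edges[i]) / len(edges[i])
            for i in range(len(values))]

random.seed(1)
values = [random.uniform(0.0, 100.0) for _ in range(8)]
for _ in range(200):
    values = averaging_round(values, random_rooted_graph(len(values)))

print("spread after 200 rounds:", max(values) - min(values))  # close to 0
```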
Vincent Hugot (Inria Lille)
Date: Tuesday, April 7 – 11h00
Place: B013
Title: Model-checking term rewriting systems: reachability analysis guided by temporal properties.
We present a model-checking framework summarised as a variant of reachability analysis for term rewriting systems. Reachability analysis endeavours to show that “a bad state is never reached”, and relies on the computation of an over-approximation of the set of reachable terms, which is of course not computable in general. We aim to generalise this approach, in the sense of lifting it from the specific statement “a bad state is never reached” to arbitrary specifications in some temporal logic — though in the case we study, the statements dictate the order in which the transitions of the system may occur. For now, the method supports a fragment of LTL sufficiently expressive to describe common safety properties. In a first step, translation rules reformulate the verification problem into an equivalent expression of propositional logic whose atoms are comparisons of languages obtained through rewriting, without regard for decidability; we call such a formula a rewrite proposition. In the second step, the rewrite proposition is given a concrete representation in terms of tree automata with or without constraints. Since the general problem is undecidable, these representations are sometimes approximations, for which we use constructions (tree automata completion) studied separately for reachability analysis; the end result is a positively approximated decision procedure for the general problem, i.e. a procedure that yields “yes, the system is correct” or “maybe”. (Actually there may be several ways to approximate the proposition, and thus a set of such procedures is generated.) We draw a few perspectives regarding the future development of this method, in particular the extension of the supported logic, and the introduction of rewrite strategies into the mix, e.g. innermost.
Marc Kaplan (Telecom ParisTech)
Place: B013
Title: Quantum attacks against iterated block ciphers.
We study the amplification of security against quantum attacks provided by iteration of block ciphers. In the classical case, the meet-in-the-middle attack is a generic attack against those constructions. This attack reduces the time required to break double iterations to only twice the time it takes to attack a single ideal block cipher, whereas a naive cryptographer would expect double iteration to lead to a quadratic improvement of security. We introduce the quantized version of this attack, which is an application of Ambainis’ celebrated quantum algorithm for the Element Distinctness problem. We then prove the optimality of the attack against two iterated ideal ciphers. An important consequence is that if a quantum adversary is capable of breaking a single encryption in time t, the time to break two iterated ideal ciphers is at least t^(4/3). In other words, the amplification of security provided by iteration is much larger against quantum adversaries than against classical ones. We then investigate security amplification by composition further by examining the case of four iterations. We quantize a recent technique introduced by Dinur, Dunkelman, Keller and Shamir, called the dissection attack. In the classical case, this attack saves a great amount of memory. Surprisingly, the quantized version also leads to savings in time, compared to a meet-in-the-middle approach. This seems to indicate that when the number of iterations grows, the resistance against quantum attacks decreases.
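For readers unfamiliar with the classical attack being quantized, here is an editor's toy meet-in-the-middle attack on double encryption with a deliberately weak 16-bit cipher (illustration only): instead of searching all 2^32 key pairs, one tabulates forward encryptions of a known plaintext and matches them against backward decryptions of its ciphertext, at roughly twice the cost of attacking a single cipher.

```python
# Toy meet-in-the-middle attack on double encryption E_k2(E_k1(m)).
# The "cipher" is a deliberately weak 16-bit toy construction, illustration only.

KEYBITS = 16
MASK = (1 << KEYBITS) - 1

def enc(k, m):                           # toy block cipher (NOT secure)
    x = (m ^ k) & MASK
    return ((x * 40503) + k) & MASK      # odd multiplier -> invertible mod 2^16

def dec(k, c):
    inv = pow(40503, -1, 1 << KEYBITS)   # modular inverse of the multiplier
    x = ((c - k) * inv) & MASK
    return (x ^ k) & MASK

def mitm(plain, cipher):
    """Recover candidate (k1, k2) pairs in about 2 * 2^16 cipher calls."""
    table = {}
    for k1 in range(1 << KEYBITS):       # forward half: E_k1(plain)
        table.setdefault(enc(k1, plain), []).append(k1)
    for k2 in range(1 << KEYBITS):       # backward half: D_k2(cipher)
        for k1 in table.get(dec(k2, cipher), []):
            yield (k1, k2)

k1, k2, message = 0x1234, 0xBEEF & MASK, 0x0042
ct = enc(k2, enc(k1, message))
candidates = list(mitm(message, ct))
# With a 16-bit block there are many false positives; a second known
# plaintext/ciphertext pair would filter them out.
print((k1, k2) in candidates, "candidate key pairs:", len(candidates))
```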
Antoine Durand-Gasselin (TU Münich)
Date: Wednesday, April 1st – 14h00
Place: B013
Title: Model Checking Parameterized Asynchronous Shared-Memory Systems.
This is joint work with Javier Esparza, Pierre Ganty and Rupak Majumdar. These results have been submitted to CAV. We characterize the complexity of liveness verification for parameterized systems consisting of a leader process and arbitrarily many anonymous and identical contributors. Processes communicate through a shared, bounded-value register. While each operation on the register is atomic, there is no synchronization primitive to execute a sequence of operations atomically. We analyze the case in which processes are modeled by finite-state machines or pushdown machines and the property is given by a Büchi automaton over the alphabet of read and write actions of the leader. We show that the problem is decidable, and has a surprisingly low complexity: it is NP-complete when all processes are finite-state machines, and is PSPACE-hard and in NEXPTIME when they are pushdown machines. This complexity is lower than for the non-parameterized case: liveness verification of finitely many finite-state machines is PSPACE-complete, and undecidable for two pushdown machines. For finite-state machines, we characterize infinite behaviors using existential abstraction and semilinear constraints. For pushdown machines, we show how contributor computations of high stack height can be simulated by computations of many contributors, each with low stack height. Together, our results characterize the complexity of verification for parameterized systems under the assumptions of anonymity and asynchrony.
Nicolas Basset (University of Oxford)
Date: Tuesday, March 31 – 14h00
Place: B013
Title: Controller synthesis for multi-component stochastic games with multiple objectives.
I will present the results of two recent papers on the synthesis of controllers (also called strategies) for systems modelled as stochastic games with multiple winning conditions. In the first paper (CONCUR 2014), we introduce a parallel product of stochastic games. We show how assume-guarantee compositional verification rules for probabilistic automata (one-player stochastic games) can be lifted to our setting. We explain how to compose winning strategies for the components into a global winning strategy for the composed game. In the second paper (TACAS 2015), we consider almost-sure satisfaction of several mean-payoff objectives. We give a method for synthesizing epsilon-optimal strategies for this kind of winning condition: given a Pareto-optimal target vector v, the produced strategy keeps the multi-dimensional mean payoff above the threshold vector v - epsilon. We use this synthesis method as a building block in the compositional method of the first paper (CONCUR 2014), which we illustrate on a case study from avionics. This is joint work with Clemens Wiltsche, Marta Kwiatkowska and Ufuk Topcu.
Lionel Rieg (Collège de France)
Date: Thursday, March 26 – 15h00
Place: B011
Title: Formal safety proofs in Coq: robot networks and compilation of synchronous languages.
In a first part, I will present the Pactole project, which aims to build a formal framework in Coq for reasoning about robot networks. This framework must be flexible enough to integrate the many model variants (presence of Byzantine robots, sensing capabilities of the robots, ...) by changing its parameters. I will illustrate the interest of this framework on a fundamental problem: gathering (on a line), where the goal is to gather robots at a single point. In full generality this problem is known to be impossible, with a formal proof in Pactole to back it up. Nevertheless, I will give an algorithm that works except for a single initial position, precisely the one used to show impossibility in the general case. We will then see how to generalize these results to higher dimensions. In a second part, I will turn to the compilation of the synchronous language Esterel. The goal of this work is to build a proved compiler from Esterel to circuits, in the way CompCert does from C to assembly. Unlike usual languages, synchronous languages speak about time in a precise way, which strongly influences their design. The difficulties of compilation are thus of a very different nature, in particular because the target is not a programming language. I will present the challenges of this compilation and some of the semantics that relate the source language to its translation into circuits.
Tarik Kaced (Univ. Hong Kong)
Date: Tuesday, March 24 – 11h00
Place: A006
Title: Optimal Non-Perfect Uniform Secret Sharing Schemes.
A secret sharing scheme is non-perfect if some subsets of participants cannot recover the secret value but have some information about it. This work is dedicated to the search for efficient non-perfect secret sharing schemes. The efficiency is measured by means of the information ratio, the ratio between the maximum length of the shares and the length of the secret value. In order to study perfect and non-perfect secret sharing schemes in full generality, we describe the structure of the schemes through their access function, a real function that measures the amount of information that every subset of participants knows about the secret value. We present new tools for the construction of secret sharing schemes. In particular, we construct a secret sharing scheme for every access function. We find a new lower bound on the information ratio that is better than the ones previously known. In particular, this bound is tight for uniform access functions. The access function of a secret sharing scheme is uniform if all participants play the same role in the scheme (e.g. ramp secret sharing schemes). Moreover, we construct a secret sharing scheme with optimal information ratio for every rational uniform access function.
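To illustrate what a non-perfect (here, ramp) scheme looks like, the following toy sketch (the editor's example, not from the paper) shares a two-element secret among three participants with shares half the size of the secret: a single share reveals nothing, a pair of shares reveals partial information, and all three shares recover the secret, so the information ratio is 1/2, which no perfect scheme could achieve.

```python
import random

# Toy ramp secret sharing over GF(P): the secret is a pair (s1, s2) of field
# elements, each share is a single field element f(i) with
#   f(x) = s1 + s2*x + r*x^2   and r fresh randomness.   Illustration only.

P = 2**31 - 1                     # a prime field

def share(s1, s2):
    r = random.randrange(P)
    f = lambda x: (s1 + s2 * x + r * x * x) % P
    return [f(i) for i in (1, 2, 3)]          # participant i receives f(i)

def reconstruct(shares):
    """Solve the 3x3 Vandermonde system for (s1, s2, r) and return the secret."""
    y1, y2, y3 = shares
    s1 = (3 * y1 - 3 * y2 + y3) % P
    s2 = ((-5 * y1 + 8 * y2 - 3 * y3) * pow(2, -1, P)) % P
    return s1, s2

secret = (123456789, 987654321)
print(reconstruct(share(*secret)) == secret)   # True: all three shares suffice
# One share alone: for every candidate secret there is exactly one consistent r,
# so it reveals nothing.  Two shares pin down one linear combination of (s1, s2)
# (e.g. shares 1 and 2 determine 3*s1 + 2*s2), i.e. partial information only.
```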
Assalé Adjé (ONERA Toulouse)
Place: A006
Title: Automatic verification of numerical program invariants by constrained optimization and policy iteration.
In general, verifying a property of numerical programs is undecidable. However, the theory of abstract interpretation makes it possible to prove that a property holds by constructing a computable over-approximation of the possible values taken by the program variables, on which the property is true. In this talk, I will show how to construct such an over-approximation using constrained optimization methods. I will then consider the following problem: how to obtain the most precise over-approximation possible. Indeed, when one is interested, for instance, in proving that the reachable values are bounded, the most precise over-approximation is sought. I will show how a fixpoint computation algorithm coming from game theory, namely policy iteration, makes it possible to obtain more precise bounds.
Thomas Chatain (LSV)
Date: Friday, March 13 – 14h00
Place: A006
Title: Implementability issues for real-time distributed systems.
Formal models for real-time distributed systems, like time Petri nets and networks of timed automata have proved their interest for the verification of real-time systems. On the other hand, the question of using these models as specifications for designing real-time systems raises several difficulties. Here we focus on the ones that are related to the distributed nature of the system. Implementing a model may be possible at the cost of some transformations, which make it suitable for the target device. In this talk, I present several results about semantics of distributed real-time systems and provide methods for the design of such systems.
Jannik Dreier (ETH Zürich, http://people.inf.ethz.ch/jdreier/)
Date: Tuesday, March 10 – 11h00
Place: B013
Title: Formal Verification of (Electronic) Exams.
Nowadays, students can be assessed not only by means of classical pencil-and-paper tests, but also using electronic exams. Such electronic exams can be taken in examination centers, or sometimes even from home via the internet. Electronic exams are appealing as they can reach larger audiences by simplifying the exam process, but they are exposed to new threats, including not only dishonest students, but also outside attackers, and misbehaving authorities. These threats are amplified by two issues: the lack of understanding of what security means for electronic exams (except the old concern about student cheating), and the absence of tools to verify whether an exam system is secure. In this talk, I will present formal definitions of several fundamental authentication, privacy and verifiability properties for exam protocols. As case studies, I will discuss three exam protocols: two recent electronic exam protocols, and the pencil-and-paper exam used by the University of Grenoble, showing that our approach can also cover classical exam systems. The automatic analysis using ProVerif highlights several weaknesses, underlining the need for formal verification.
Chantal Keller (Microsoft Research Cambridge & MSR-Inria Joint Centre)
Place: B013
Title: F*: Higher-Order Effectful Program Verification.
F* is an ML-like language designed for program verification. It is based on refinement types, a powerful extension of ML types to express properties about programs. These properties are automatically checked via the generation of verification conditions that are finally discharged by SMT solvers. F* provides a uniform framework to deal both with programs with effects and non-termination, for which one might want to establish some properties, and pure and terminating programs, that can also be used inside the logical refinements. To this end, it relies on a fine-grained control of effects and a termination checker based on a semantic criterion. F* finally generates code to various backends, ranging from OCaml to JavaScript. After giving an overview of this language, I will present some aspects of its internals, in particular the treatment of effects and termination, and the generation of verification conditions all the way down to SMT solvers. I will finally discuss ongoing work on a full certification of F*.
Henning Schnoor (Univ. Kiel)
Date: Wednesday, March 4 – 14h00
Place: A006
Title: Information-flow Security: Definitions, Characterizations and Verification Complexity.
We develop a theory for state-based noninterference in a setting where different security policies apply in different parts of a given system. Our theory comprises appropriate security definitions, characterizations of these definitions, for instance in terms of unwindings, algorithms for analyzing the security of systems with local policies, and corresponding complexity results. Additionally, we study the verification complexity for established notions of noninterference such as IP-security and TA-security. In both of these cases, we obtain NL algorithms, significantly improving on previously known bounds.
Eugen Zalinescu (ETH Zurich)
Date: Thursday, March 5 – 11h00
Place: A006
Title: Monitoring of Temporal First-order Properties with Aggregations.
In system monitoring, one is often interested in checking properties of aggregated data. Current policy monitoring approaches are limited in the kinds of aggregations they handle. To rectify this, we extend an expressive language, metric first-order temporal logic, with aggregation operators. Our extension is inspired by the aggregation operators common in database query languages like SQL. We provide a monitoring algorithm for this enriched policy specification language. We show that, in comparison to related data processing approaches, our language is better suited for expressing policies, and our monitoring algorithm has competitive performance.
Benjamin Hellouin (Univ. Andres Bello, Santiago)
Date: Tuesday, February 24 – 14h00
Place: A006
Title: Limit measures in cellular automata.
I am interested in the behaviour of cellular automata on a random initial configuration, which amounts to studying their action on the space of probability measures. I will give an overview of my research on the limit measures of these systems, which can be seen as their typical asymptotic behaviour, or as their output when viewed as probabilistic algorithms. In dimension 1, the measures reachable from a simple initial measure, e.g. the uniform measure, can be entirely described by computability conditions. Lifting this result to higher dimensions is work in progress that has already yielded partial results. However, this approach cannot produce limit measures of full support, which can only be reached by surjective dynamics. Although surjective cellular automata are capable of computation, their action on measures exhibits a strong rigidity that is still poorly understood. It is conjectured, based on experiments, that some subclasses make every simple initial measure converge to the uniform measure. We give the first complete proof of this phenomenon for a cellular automaton with particular algebraic properties, using self-similar structures in its time evolution.
Hicham Lakhlef (Univ. Franche-Comté)
Date: Tuesday, February 17 – 14h00
Place: A006
Title: Distributed algorithms for optimizing the deployment of MEMS microrobots: are formal methods needed?
In this seminar I will present some of the work done during my PhD, on distributed algorithms for the reconfiguration of MEMS microrobots. MEMS microrobots are receiving growing attention since they can carry out many missions and tasks in a wide range of applications, including odour localization, firefighting, medical service, security, and rescue. They can perform tasks in environments that are unstructured, complex, dynamic and unknown. To carry out these missions and tasks, MEMS microrobots must apply redeployment protocols in order to adapt to working conditions. These reconfiguration protocols must cope with the characteristics of MEMS nodes, in particular their limited resources (memory, energy). These algorithms should also be efficient, scalable, robust, and use only local information. In this seminar, I present efficient reconfiguration algorithms for modular MEMS microrobots. The goal of these algorithms is to optimize the logical topology of the microrobots using distributed redeployment protocols. I present and compare two distributed algorithms for reconfiguring a chain into a square. The first is dynamic in a wake-up/sleep sense and preserves network connectivity during the reconfiguration process, while the second, which is faster, does not. I show how to use parallel moves to optimize the running time of the algorithms and minimize the number of node moves, while keeping the network connected and the algorithms robust and scalable. Finally, I present a reconfiguration algorithm that handles any initial topology. As a perspective, I will argue that formal methods are needed to prove that microrobot systems programmed with distributed protocols will always behave as expected, under changing conditions and in a variety of uncertain environments.
Xavier Urbain (CNAM)
Place: A006
Title: A formal proof framework suited to mobile robot networks.
Networks of mobile robots have received increasing attention in recent years from the distributed algorithms community. While using swarms of robots cooperating to carry out various tasks is an appealing prospect, analysing the feasibility of certain tasks in this emerging setting is extremely hard, in particular when some robots exhibit so-called Byzantine behaviours, that is, arbitrary or even hostile ones. To obtain formal guarantees in this context, we propose a mechanized formal framework based on the Coq proof assistant and tailored to robot networks. In this talk we focus mainly on impossibility results, which are fundamental in that they establish what can or cannot be achieved and allow bounds, and hence the optimality of certain solutions, to be defined. Using a higher-order assistant such as Coq lets us easily express quantifications over algorithms, which are treated as abstract objects. We illustrate the possibilities offered by our development by presenting the first formal proofs (and thus certifications) of results such as the impossibility of convergence for oblivious robots when a third of them are Byzantine, or the impossibility of gathering for an even number of robots evolving in R.
Erika Abraham (RWTH Aachen)
Date: Tuesday, June 12 – 14h00
Place: C005
Title: Reachability Analysis of Hybrid Systems.
Hybrid systems exhibit both dynamic and discrete behavior. Typically, the dynamic behavior stems from the continuous evolution of physical systems, whereas the discrete behavior stems from the execution steps of discrete controllers. As a modeling language we consider hybrid automata, extending discrete automata with continuous dynamics. Given a hybrid automaton model of a hybrid system, the perhaps most basic question one could be interested in is whether certain model states can be reached from a given initial state. Though this reachability problem sounds simple, it is undecidable for all but some very simple subclasses of hybrid automata. Nevertheless, there are different techniques to solve it in an incomplete manner. We discuss methods that use different geometric objects like polytopes, zonotopes, ellipsoids etc. to represent state sets, and apply a forward fixedpoint-based search to over-approximate the set of reachable states. Such methods need to determine successor sets under both discrete jumps and continuous evolution. The latter is often done by flowpipe construction, paving the whole flow by a set of geometric objects of the chosen type.
David Deharbe (Univ. Rio Grande do Norte)
Date: Tuesday, May 20 – 14h00
Place: B013
Title: b2llvm: B developments onto the LLVM.
We present b2llvm, a code generator for the B-method targeting the LLVM intermediate format. b2llvm currently handles the following elements of the B language: simple data types, imperative instructions and component compositions. The translation rules from some essential constructs of the B language for implementations to LLVM code are presented and illustrated with examples. Some verification and validation strategies are proposed.
Jasmin Blanchette (TU Munich)
Date: Tuesday, April 29 – 14h00
Place: B013
Title: Proofs, Counterexamples, and Codatatypes for Isabelle/HOL.
Isabelle is a generic proof assistant that supports HOL (Higher-Order Logic), ZF (Zermelo-Fraenkel Set Theory), and TLA+ (Temporal Logic of Actions). My research so far has focused on Isabelle/HOL: (1) I enhanced the popular Sledgehammer proof tool, which is based on external automatic provers (e.g., E, SPASS, Vampire, and Z3); (2) I designed the SAT-based counterexample generator Nitpick; (3) together with a few colleagues, I am adding support for codatatypes and corecursive functions. In this talk, I will briefly summarize these strands of research and my future plans.
Clément Aubert (IML)
Date: Wednesday, March 19 – 14h00
Place: B013
Title: Some contributions of Linear Logic to complexity theory.
In proof theory, Linear Logic proposes a formalization of proofs that places their dynamics at the centre of attention and pays close attention to structural operations. Since the introduction of its so-called bounded and light variants, Linear Logic has contributed to complexity theory in several ways. We will present various advances, focusing on implicit characterizations of complexity classes, that is, characterizations independent of measures on abstract machine models. Using a range of syntactic and semantic tools, Linear Logic has produced characterizations of parallel execution, of non-determinism, and of programs that are efficient in time or space. We will give a quick overview of these methods, relating them to other implicit approaches to complexity and highlighting our contributions.
Rohit Parikh (CUNY)
Date: Friday, November 8 – 15h00
Place: C005
Title: Knowledge is Power, and so is Communication.
The BDI theory says that people’s actions are influenced by two factors, what they believe and what they want. Thus we can influence people’s actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare’s Beatrice-Benedick case in Much Ado about Nothing is an old example. Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues: a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents? b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge? c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict?
Iordanis Kerenidis (CNRS, LIAFA)
Date: Tuesday, October 15 – 14h00
Place: A008
Title: Tutorial on Quantum Communication.
Part of the Journées Informatique Quantique 2013 (http://www.loria.fr/~sperdrix/JIQ13.html), October 14 and 15.
Josef Widder (TU Wien)
Date: Wednesday, October 9 – 14h00
Place: C005
Title: Parameterized Model Checking of Fault-tolerant Distributed Algorithms by Abstraction.
Fault-tolerant distributed algorithms are central for building reliable, spatially distributed systems. In order to ensure that these algorithms actually make systems more reliable, we must ensure that these algorithms are actually correct. Unfortunately, model checking state-of-the-art fault-tolerant distributed algorithms (such as Paxos) is currently out of reach except for very small systems. In order to be eventually able to model check such fault-tolerant distributed algorithms also for larger systems, several problems have to be addressed. In this talk, I will present recent work that considers modeling and verification of fault-tolerant algorithms that basically only contain threshold guards to control the flow of the algorithm. I will present an abstraction technique that allows parameterized model checking of safety and liveness properties of such algorithms. Our experimental data shows that our abstraction is precise, and allows automated verification of simple, fault-tolerant distributed algorithms for various fault models. As threshold guards are widely used in fault-tolerant distributed algorithms (and also in Paxos), efficient methods to handle them bring us closer to the above mentioned goal.
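To give a flavour of the threshold guards in question (an editor's toy sketch, not one of the verified case studies), here is a simulation of a simple broadcast rule of the form "echo once t+1 ECHO messages are received, accept once n-t are received", with n processes of which at most t may crash:

```python
import random

# Toy simulation of a threshold-guarded broadcast (in the spirit of the
# algorithms discussed, not one of the verified case studies).
# n processes, at most t of them may crash; the guards only compare message
# counters against thresholds such as t+1 and n-t.

n, t = 7, 2

def run(initial_senders, crashed):
    echoed = set(initial_senders)            # processes that have sent ECHO
    accepted = set()
    for _ in range(n):                       # enough synchronous rounds
        received = len(echoed - crashed)     # ECHOs actually delivered
        for p in range(n):
            if p in crashed:
                continue
            if received >= t + 1:            # guard: enough ECHOs, so echo too
                echoed.add(p)
            if received >= n - t:            # guard: enough ECHOs, so accept
                accepted.add(p)
    return accepted

random.seed(0)
crashed = set(random.sample(range(n), t))
correct = set(range(n)) - crashed
# If every correct process starts by echoing, every correct process accepts:
print(run(correct, crashed) >= correct)
# If nobody echoes initially, nobody ever accepts (no spurious acceptance):
print(run(set(), crashed) == set())
```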
David Janin (LaBRI)
Date: Tuesday, October 8 – 15h00
Place: B013
Title: The dining philosophers revisited: a new algebraic approach to an old modeling problem?
In the field of distributed algorithms, the dining philosophers problem is a typical algorithmic and modeling problem. How should one model distributed states? Local transitions? Their combinations? The resulting global transitions? Correctness criteria? There are already many modeling proposals. In this talk, we will provide yet another one. We will show how inverse semigroup theory (a mathematical theory developed since the 1950s yet rather unknown in computer science) and the associated language theory provide robust concepts and tools for this modeling. This is joint (exploratory) work with Anne Dicky, to be presented at MSR 2013 in Rennes. Oddly enough, this approach was inspired by recent experiments in music system modeling and programming.
Bruno Conchinha (ETH Zurich)
Date: Tuesday, September 24 – 14h00
Place: C005
Title: Symbolic Probabilistic Analysis of Off-line Guessing.
We introduce a probabilistic framework for the automated analysis of security protocols. Our framework provides a general method for expressing properties of cryptographic primitives, modeling an attacker who is more powerful than conventional Dolev-Yao attackers. It allows modeling equational properties of cryptographic primitives as well as property statements about their weaknesses, e.g. primitives leaking partial information about messages or the use of weak algorithms for random number generation. Moreover, we can use these properties to find attacks and estimate their success probability. Existing symbolic methods can neither model such properties nor find such attacks. We show that the probability estimates we obtain are negligibly different from those yielded by a generalized random oracle model based on sampling terms into bitstrings while respecting the stipulated properties of cryptographic primitives. As case studies, we use a prototype implementation of our framework to model non-trivial properties of RSA encryption and automatically estimate the probability of off-line guessing attacks on the EKE protocol.
Mark Ryan (University of Birmingham)
Date: Tuesday, July 23 – 14h00
Place: C005
Title: Malicious-but-cautious certificate authorities.
The “certificate authority” model for authenticating public keys of websites has been attacked in recent years, and several proposals have been made to reinforce it. We develop and extend a recent proposal called “certificate transparency”, so that it properly handles certificate revocation. We show how extended certificate transparency has applications beyond the web. In particular, we detail how it can be used to build a secure end-to-end email or messaging system using PKI with no trusted parties. We believe this finally makes end-to-end encrypted email as usable and user-friendly as encrypted web browsing is today. Underlying these ideas is a new attacker model appropriate for cloud computing, which we call “malicious-but-cautious”.
Florian Deloup (Institut de Mathématiques de Toulouse, Univ. Toulouse III)
Date: Tuesday, June 25 – 14h00
Place: C005
Title: The genus of regular languages.
If we forget part of its structure, an automaton can be regarded as a graph, for which the notion of genus is well defined. Intuitively, a graph has genus at most g if it embeds into a sphere with g handles. Of course, from the computer science point of view, automata are richer than graphs: they compute languages, and a given regular language is recognized by infinitely many automata. It is then natural to define the genus of a regular language L as the minimal genus of the automata recognizing L. We will present some results and open questions about this new invariant of languages.
Sylvain Salvati (INRIA, LaBRI)
Date: Wednesday, May 15 – 14h00
Place: C005
Title: Evaluation is MSOL compatible.
We consider simply-typed lambda calculus with fixpoint operators. Evaluation of a term gives as a result the Bohm tree of the term. We show that evaluation is compatible with monadic second-order logic (MSOL). This means that for a fixed finite vocabulary of terms, the MSOL properties of Bohm trees of terms are effectively MSOL properties of terms themselves. Theorems of this kind have been known for some graph operations: unfolding, and Muchnik iteration. Similarly to those results, our main theorem has diverse applications. It can be used to show decidability results, to construct classes of graphs with decidable MSOL theory, or to obtain MSOL formulas expressing properties of terms. Another application is decidability of a control-flow synthesis problem. It is a joint work with Igor Walukiewicz.
Joost-Pieter Katoen (Aachen University)
Date: Tuesday, April 16 – 14h00
Place: C005
Title: Analyzing Probabilistic Programs: Pushing the Limits of Automation.
Probabilistic programs are pivotal for cryptography, privacy and quantum computing; they are typically only a few lines long, but hard to understand and analyse. The main challenge for their automated analysis is the infinite domain of the program variables combined with the occurrence of parameters. Here, parameters can range over concrete probabilities but can also encompass various aspects of the program, such as the number of participants, the size of certain variables, etc. McIver and Morgan have extended the seminal proof method of Floyd and Hoare for sequential programs to probabilistic programs by making the program annotations real- rather than Boolean-valued expressions in the program variables. As in traditional program analysis, the crucial annotations are the loop invariants. The problem of synthesizing loop invariants is very challenging for probabilistic programs, where real-valued, quantitative invariants are needed. In this talk, we present a novel approach to synthesize loop invariants for the class of linear probabilistic programs. In order to provide an operational interpretation to the quantitative annotations, we give a connection between McIver and Morgan’s wp-semantics of probabilistic programs and parameterized Markov decision processes.
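As a tiny example of the kind of quantitative annotation involved (a hand-made illustration by the editor, not produced by the method), consider the loop `while flip(p): n := n + 1`; the real-valued annotation n + p/(1-p) is preserved in expectation by the loop body, and its value at loop entry, p/(1-p), is exactly the expected final value of n. The sketch below checks this by simulation:

```python
import random

# The loop "while flip(p): n := n + 1" with a candidate quantitative invariant
# I = n + p/(1-p): its value at loop entry should equal the expected final n.
# Hand-made illustration of the kind of annotation discussed, checked by sampling.

def run_loop(p, n=0):
    while random.random() < p:
        n += 1
    return n

p = 0.75
random.seed(42)
samples = 200_000
empirical = sum(run_loop(p) for _ in range(samples)) / samples
invariant_at_entry = 0 + p / (1 - p)          # I evaluated at n = 0
print(f"empirical E[n] = {empirical:.3f}, invariant predicts {invariant_at_entry:.3f}")
```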
Binh Thanh Nguyen (ETH Zurich)
Date: Tuesday, April 9 – 14h00
Place: C005
Title: Sound Security Protocol Transformations.
We propose a class of protocol transformations, which can be used to (1) develop (families of) security protocols by refinement and (2) abstract existing protocols to increase the efficiency of verification tools. We prove the soundness of these transformations with respect to an expressive security property specification language covering secrecy and authentication properties. Our work clarifies and significantly extends the scope of earlier work in this area. We illustrate the usefulness of our approach on a family of key establishment protocols.
Induction in SMT: Seminar day
Date: Wednesday, March 27 –
Place: B013
11:00 – 12:00 Decidability of inductive theorems based on rewriting induction. Takahito Aoto.
Rewriting induction (Reddy, 1990) is a method for proving inductive theorems of term rewriting systems (TRSs). In general, it is undecidable whether a given equation is an inductive theorem of a given TRS. Toyama (2002) and Falke and Kapur (2006) gave characterizations of equations and TRSs for which this question is decidable, based on rewriting induction, that is, using rewriting induction as the underlying decision procedure. We give an abstract view of this approach and try to integrate some known decidable classes.
15:00 – 16:00 Rewriting Induction + Linear Arithmetic = Decision Procedure. Stephan Falke.
This talk discusses new results on the decidability of inductive validity of conjectures. For these results, a class of term rewrite systems (TRSs) with built-in linear integer arithmetic is introduced and it is shown how these TRSs can be used in the context of inductive theorem proving. The proof method developed for inductive theorem proving couples (implicit) inductive reasoning with a decision procedure for the theory of linear integer arithmetic with (free) constructors. The effectiveness of the new decidability results on a large class of conjectures is demonstrated by an evaluation of the prototype implementation Sail2.
16:00 – 17:00 Noetherian Induction for First-Order Reasoning. Sorin Stratulat.
Noetherian induction is a powerful mathematical principle that allows reasoning on sets with an unbounded number of elements, provided there is a Noetherian ordering that forbids the construction of any infinite strictly decreasing sequence of elements. In first-order logic, we can distinguish two instantiations of it, depending on the kind of elements we reason on: terms or (first-order) formulas. The term-based principles rely on schemata. An induction schema attaches a set of formulas as induction hypotheses (IHs) to another formula, called the induction conclusion, with the intention that the IHs be used in the proof of the induction conclusion. Widely spread among proof assistants, it can easily be integrated into sequent-based inference systems as a separate inference rule. The induction reasoning is local, at schema level, and the induction orderings may differ from one schema to another. However, it is not lazy, since the IHs must be defined (sometimes long) before their use, or are not used at all. Moreover, it cannot naturally manage mutual induction with other formulas from the proof. The formula-based principles stem from Musser’s inductionless induction (or proof by consistency) method based on the Knuth-Bendix completion algorithm. The reasoning methods implementing them are lazy and allow mutual induction. On the other hand, they are reductive, i.e. the formulas newly derived in the proof should be smaller (or equal), at the ground level, than the formula currently processed, and the induction ordering is unique for a given proof. In this talk, we sketch a cycle-based induction method that generalizes and keeps the best features of the above methods. The application of any IH is validated by a cycle of formulas governed by an induction ordering. The reasoning is local, at cycle level, reductive-free, and naturally performs mutual and lazy induction.
From a theoretical point of view, our method synthesizes the overall usage of Noetherian induction reasoning in first-order logic. From a practical point of view, it allows for less restrictive specifications, more efficient implementations and proof certification processes.
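A standard textbook example of the distinction at play (added for illustration, not specific to any of the three talks): over the usual TRS for addition on Peano numbers, the equation plus(x, 0) = x is an inductive theorem, valid in the initial model and provable by rewriting induction, yet it is not an equational consequence of the rules.

\[
\mathcal{R} = \{\ \mathit{plus}(0, y) \to y,\ \ \mathit{plus}(s(x), y) \to s(\mathit{plus}(x, y))\ \},
\qquad
\mathcal{R} \models_{\mathrm{ind}} \mathit{plus}(x, 0) = x .
\]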
Professor Neil Jones (University of Copenhagen, visiting UL – LORIA)
Date: Wednesday, March 27 – 14h00
Place: A008
Title: Obfuscation by Partial Evaluation of Distorted Interpreters
(with Roberto Giacobazzi, Isabella Mastroeni, Verona; from PEPM 2012) How can one construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P’ from any program P whose source code is given. Start with a (program-executing) interpreter interp for the language in which P is written. Then “distort” interp so it is still correct, but its specialization P’ with respect to P is transformed code that is equivalent to the original program, but harder to understand or analyze. Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P’ on which (abstract) interpretation is incomplete. Interpreter distortion can be done by making sufficiently many interpreter operations residual in the specialization process, so as to defeat an attacker trying to extract sensible information from the transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
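A very small sketch of the idea (the editor's own toy example, not the paper's construction): a reference interpreter for arithmetic expressions, and a "distorted" but still correct variant that keeps every intermediate value in an encoded form (x maps to 2x+1). Specializing the distorted interpreter to a fixed program would yield equivalent residual code whose data flow is harder to analyse.

```python
# Toy illustration of obfuscation by interpreter distortion (editor's sketch).
# Programs are arithmetic expressions: ("num", k), ("var", name), ("add"/"mul", e1, e2).

def interp(e, env):
    """Reference interpreter."""
    tag = e[0]
    if tag == "num": return e[1]
    if tag == "var": return env[e[1]]
    a, b = interp(e[1], env), interp(e[2], env)
    return a + b if tag == "add" else a * b

# Distortion: work on encoded values enc(x) = 2*x + 1 throughout.
enc = lambda x: 2 * x + 1
dec = lambda y: (y - 1) // 2

def interp_distorted(e, env):
    """Same language, but every intermediate value lives in the encoded domain.
    Specializing this interpreter to a fixed program gives obfuscated residual code."""
    tag = e[0]
    if tag == "num": return enc(e[1])
    if tag == "var": return enc(env[e[1]])
    a, b = interp_distorted(e[1], env), interp_distorted(e[2], env)
    if tag == "add":                       # enc(x) + enc(y) computed without decoding
        return a + b - 1
    return ((a - 1) * (b - 1)) // 2 + 1    # enc(x * y) from enc(x), enc(y)

prog = ("add", ("mul", ("var", "x"), ("num", 3)), ("num", 4))   # x*3 + 4
env = {"x": 5}
print(interp(prog, env), dec(interp_distorted(prog, env)))       # both print 19
```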
Christophe Calvès (UL – LORIA)
Date: Tuesday, March 26 – 14h00
Place: A008
Title: Multi-Focus Rewrite Rules
Rewrite rules are frequently used to perform program transformations. This includes compilation (optimisations, code generation, ...) and evaluation to specify program semantics, for state exploration or just interpretation. This talk presents multi-focus rewrite rules and strategies, able to traverse, match and rewrite, in a single rule, several subterms of a subject. These rules are particularly well suited to specifying programming-language semantics, as this often requires considering several elements (code fragment, memory, ...) to perform actions. We will demonstrate the modularity and compositionality of our approach with the specification, in TOM (a compiler embedding algebraic terms and rewrite rules in many languages), of the semantics of an ML-like toy language.
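As a rough illustration of the "several subterms in a single rule" idea (an editor's toy sketch in Python, unrelated to the actual TOM syntax or implementation), a rule can be given several foci, here the head of the code and the store, that are matched and rewritten together in one step, which is the typical shape of a semantics rule:

```python
# Toy illustration of a multi-focus rewrite rule (editor's sketch, not TOM syntax):
# one rule simultaneously matches the next statement and the store, and
# rewrites both in a single step.

def rule_assign(code, store):
    """Focus 1: the next statement; focus 2: the store.  Both rewritten at once."""
    if code and code[0][0] == "assign":
        _, var, value = code[0]
        return code[1:], {**store, var: value}   # rewrite both foci together
    return None                                   # rule does not apply

def rewrite_all(state, rules):
    """Apply the first applicable rule repeatedly until none applies."""
    code, store = state
    while True:
        for rule in rules:
            result = rule(code, store)
            if result is not None:
                code, store = result
                break
        else:
            return code, store

program = [("assign", "x", 1), ("assign", "y", 2), ("assign", "x", 3)]
print(rewrite_all((program, {}), [rule_assign]))   # ([], {'x': 3, 'y': 2})
```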
Quantic Day
Date: Thursday, March 21 – 10h00-18h00
Place: C103
On Thursday, March 21, a workshop day on quantum computing will take place in room C103.
10h00 – Accueil
10h30 – 11h15 Gilles Dowek – TBA
11h15 – 12h00 Pablo Arrighi – Quantum Cellular Automata: structure, universality
14h00 – 14h45 Maarten Van den Nest – Classical simulation complexity of quantum circuits.
14h45 – 15h30 Emmanuel Jeandel – Dynamics of models of computation
16h15 – 16h45 Benoit Valiron – Quipper: a scalable quantum programming language
16h45 – 17h15 Vincent Nesme – Quantum walks and filter banks
17h15 – 17h45 Alejandro Diaz-Caro – Vectorial types, non-determinism and probabilistic systems: towards a computational quantum logic
Hans van Ditmarsch (CNRS – LORIA)
Date: Tuesday, March 12 – 14h
Place: A008
Title: Russian Cards, communication protocols, and cryptography
Consider the following logic puzzle, known as ‘Russian Cards’: From a pack of seven known cards 0,1,2,3,4,5,6 Alice and Bob each draw three cards and Eve gets the remaining card. How can Alice and Bob openly (publicly) inform each other about their cards, without Eve learning, for any of their cards, who holds it? Clearly Alice and Bob are required to devise a secure protocol to communicate their secrets to each other, and Eve is the eavesdropper. Unlike traditional security settings, initially the three agents share knowledge: how many cards there are, and that everybody knows his own cards. This riddle, similar riddles, and things such as schemes for bidding in Bridge, have played a role in the development of information-based cryptography. One original publication (not treating this riddle) is by Fischer & Wright: Bounds on Secret Key Exchange Using a Random Deal of Cards, Journal of Cryptology, 1996. I will present and discuss solutions to the riddle. There are several solutions. I will explain its interest to the dynamic epistemic logic community: various ‘near solutions’ can be described in terms of knowledge, common knowledge, and preservation of knowledge properties after update (typically, making ignorance public can leak information). I will summarily present some recent results more or less in combinatorial mathematics. This includes work with my (until recently) collaborators in Sevilla. Some relevant publications: * M.H. Albert, R.E.L. Aldred, M.D. Atkinson, H.P. van Ditmarsch, C.C. Handley, Safe communication for card players by combinatorial designs for two-step protocols. Australasian Journal of Combinatorics, 33:33-46, 2005. * Andrés Cordón-Franco, Hans van Ditmarsch, David Fernández Duque, Joost J. Joosten, Fernando Soler-Toscano. A secure additive protocol for card players. Australasian Journal of Combinatorics 54: 163-175, 2012. * Colleen Swanson, Douglas Stinson. Combinatorial Solutions Providing Improved Security for the Generalized Russian Cards Problem. Designs, Codes and Cryptography; published online first November 2012.
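For the curious, here is a small brute-force checker (an editor's sketch) for candidate announcements in this puzzle: Alice publicly announces a set of possible hands containing her own; for a given deal the announcement is "safe and informative" if Bob can deduce Alice's hand while Eve cannot deduce, for any card outside her own hand, who holds it. The announcement used below is one that is often used to illustrate the puzzle; the code verifies it rather than assuming it.

```python
from itertools import combinations

# Brute-force checker for the Russian Cards problem (editor's sketch).
# Deal: Alice and Bob hold 3 cards each from {0..6}, Eve holds the remaining one.

CARDS = set(range(7))

def possible_deals(announcement, known_hand, owner):
    """Deals that an observer holding 'known_hand' (owner = "bob" or "eve")
    still considers possible after hearing Alice's announcement."""
    deals = []
    for alice in announcement:
        if alice & known_hand:
            continue                                  # Alice cannot hold my cards
        rest = CARDS - alice
        for bob in map(frozenset, combinations(sorted(rest), 3)):
            eve = rest - bob
            if owner == "bob" and bob != known_hand: continue
            if owner == "eve" and eve != known_hand: continue
            deals.append((alice, bob, eve))
    return deals

def safe_and_informative(announcement, alice, bob):
    eve = CARDS - alice - bob
    # Informative for Bob: only one Alice hand remains possible.
    if len({a for (a, _, _) in possible_deals(announcement, bob, "bob")}) != 1:
        return False
    # Safe against Eve: for no card outside her hand does she learn its owner.
    eve_deals = possible_deals(announcement, frozenset(eve), "eve")
    for card in CARDS - eve:
        owners = {"alice" if card in a else "bob" for (a, _, _) in eve_deals}
        if len(owners) == 1:
            return False
    return True

alice, bob = frozenset({0, 1, 2}), frozenset({3, 4, 5})
announcement = [frozenset(h) for h in
                [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]]
print(safe_and_informative(announcement, alice, bob))   # True for this deal
```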
Benoît Groz (Université de Tel-Aviv)
Date: Friday, March 1 – 14h
Place: A008
Title: Deterministic regular expressions
Deterministic regular expressions are used in several XML schema languages (DTDs, XML Schema). We present techniques for testing in linear time whether a regular expression is deterministic. We also show that membership of a word in the language of an expression can be decided in linear time for large families of deterministic expressions. Our algorithms build data structures over the syntax tree of the expression, whereas the classical algorithms for these problems rely on the construction of the Glushkov automaton, a construction that is quadratic when the alphabet size is not bounded.
Kumbakonam Govindarajan Subramanian (Prof. at Universiti Sains Malaysia)
Date: Tuesday, February 12 – 14h
Place: A008
Title: Parikh Matrices of Words and Their Properties
Words and their combinatorial properties constitute a growing area of research that can be classified under discrete mathematics. A number of fundamental and significant mathematical results have been revealed in this study. This area can also be considered as a study of discrete sequences or a study of combinatorial properties of free monoids. A word is a finite or an infinite sequence of symbols taken from a finite set, called an alphabet. A subword of a given word w is a subsequence of w. In the recent past, the notion of a special matrix, called the Parikh matrix, associated with a word has been introduced and investigated. This matrix gives information on the number of certain subwords of a given word. This talk is intended to provide some background and highlight interesting results established by researchers on the topic of Parikh matrices. An extension of the notion of a Parikh matrix to picture arrays, which are rectangular arrangements of symbols, will also be indicated.
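For readers who want to see the object concretely, here is a short Python sketch (following the usual matrix convention; the helper names are assumptions) computing the Parikh matrix of a word as a product of elementary upper-triangular matrices, so that the entry in row i and column j+1 counts the occurrences of the scattered subword a_i a_{i+1} … a_j.

import numpy as np

def parikh_matrix(word, alphabet):
    """Parikh matrix of `word` over the ordered `alphabet` a1 < ... < ak."""
    k = len(alphabet)
    index = {a: i for i, a in enumerate(alphabet)}
    m = np.eye(k + 1, dtype=int)
    for letter in word:
        e = np.eye(k + 1, dtype=int)
        q = index[letter]
        e[q, q + 1] = 1          # elementary matrix of the q-th letter
        m = m @ e
    return m

# Over the ordered alphabet a < b: entry (0,1) counts a's, entry (1,2) counts b's,
# and entry (0,2) counts occurrences of the scattered subword "ab" (3 in "abab").
print(parikh_matrix("abab", "ab"))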
Etienne Lozes (LSV – Univ. Cassel)
Date: Friday, February 15 – 14h
Place: A008
Title: Proofs of copyless message-passing programs in Heap-Hop
Designing message-passing programs can become difficult when the cost of copying a message from the sender's address space to the receiver's is a critical factor for the efficiency of the program. One solution to this problem is to use an address space shared by the sender and the receiver. In this approach copying can be avoided, but sender and receiver must agree on how the memory is accessed. Jules Villard and I have introduced an extension of separation logic that makes it possible to prove such programs correct with respect to memory accesses. Inspired by the Sing# language, our system introduces a notion of channel contract that guarantees stronger properties than those usually established by separation logic: proved programs are guaranteed to be free of communication errors (forgotten message, unexpected message) and of memory leaks. I will illustrate these ideas with the Heap-Hop tool, and give an overview of our ongoing developments on the theory of channel contracts.
Dominique Unruh (University of Tartu, Tallinn)
Date: Tuesday, February 5 – 14h
Place: A008
Title: Everlasting Security through Quantum Cryptography
Cryptography is a powerful tool for processing confidential data. Cryptographic protocols are, however, only as secure as the underlying encryption schemes, and we cannot rule out that these will be broken at some point in the future. This leaves us in a difficult situation: if we process highly sensitive data using cryptographic protocols, an attacker might simply record all messages. Should the underlying encryption scheme be broken in the future, the attacker will then be able to decrypt all confidential data in retrospect. For highly confidential data (such as medical records) such a situation is not acceptable. A way out is “everlasting security”. A protocol with everlasting security guarantees that all data involved in the protocol stays secure, even if at some point in the future all the underlying encryption schemes are broken. Unfortunately, with traditional cryptographic techniques, everlasting security can only be achieved in very limited situations. In this talk, we explain how everlasting security can be achieved for a wide variety of tasks by using quantum cryptography, i.e., by making use of quantum mechanical effects in the cryptographic protocol. (No prior knowledge of quantum cryptography is required.)
Stefan Hetzl (INRIA Saclay)
Date: Tuesday, December 18 – 14h
Place: A008
Title: Algorithmic Structuring and Compression of Proofs
This talk will describe a method for the compression of formal proofs by the computation of new lemmas. We use tree grammars for an efficient treatment of the term structure of a first-order proof. Our algorithm works by first minimizing a tree grammar which, in a second step, induces a formula that serves as an abbreviating lemma. Finally, I will outline an approach to extending this work to a method for inductive theorem proving.
Nicolas Pouillard (DemTech project, IT University of Copenhagen (ITU))
Date: Friday, November 9 – 11h
Place: A008
Title: Proving cryptographic schemes in Agda (Dependently typed functional programming for Alice and Bob)
In order to gain confidence in cryptographic schemes and primitive operations, cryptographers use a combination of theories. These theories enable us to mathematically prove security properties of these constructions. Probability theory, game theory, number theory and complexity theory are often all needed at the same time. We aim at realizing these proof arguments in Type Theory using Agda. Our current approach is to carefully use abstractions and dependently typed functional programming techniques to elegantly prove security properties. Finally we aim at reducing the usual gap between the formal description of the construction and its actual implementation. We do so by writing in a functional style supporting extraction into low level circuits.
Sergueï Lenglet (Université de Lorraine, LORIA)
Date: Tuesday, October 23 – 14h
Place: A008
Title: Bisimulations and congruence proofs in higher-order languages
Bisimilarities are equivalences that relate the terms of a language that have the same behaviour (for a given definition of “same behaviour”). To prove that a bisimilarity is sound, one must prove that it is a congruence, i.e. preserved by contexts: if two terms are bisimilar, then they remain bisimilar once placed in any context. This is a particularly difficult problem in higher-order languages, such as the lambda-calculus or the higher-order pi-calculus (in which the exchanged messages contain executable processes). For languages derived from the lambda-calculus, there is a systematic method for proving congruence, called Howe's method. We will see how to adapt Howe's method to the higher-order pi-calculus. At the end of the talk I will also briefly mention my other work, to give you an idea of my research interests and possibly initiate collaborations within the department.
Serdar Erbatur (SUNY Albany)
Date: Tuesday, October 16 – 14h
Place: A008
Title: Unification in Blind Signatures
Blind signatures are signature schemes that keep the content confidential and have applications in modern cryptography for electronic voting and for digital cash schemes. We study unification modulo axioms that originate from the RSA implementation of blind signatures. In this talk, we will present one such axiom, called E. The axiom E has an undecidable unification problem. It specifies that a binary operator distributes synchronously over another binary operator, hence the name “synchronous distributivity”. Furthermore, it appears that this is the weakest equational theory with an undecidable unification problem. Next, we briefly present the axiom F, which is similar to E, and show that it has a decidable unification problem. This last axiom is not related to blind signatures but satisfies a property of El Gamal encryption. These are only some of the many possible axiomatizations of algebraic properties of blinding schemes. The results we obtained will be useful for analyzing protocols with blinding once the algorithms are integrated into protocol analysis tools.
Mark D. Ryan (University of Birmingham)
Date: Tuesday, July 10 – 11h
Place: A008
Title: Corruption evidence in electronic voting
Coercion-resistance is a fundamental, and strong, property of electronic voting systems. It states that a voter should be able to cast his vote as intended, even in the presence of a coercer that may try to force him to cast a different vote. Especially in remote voting, coercion-resistance is extremely hard to achieve and leads either to schemes of questionable usability or to trust assumptions that are difficult to meet in practice. We propose a change of perspective, replacing the requirement of coercion-resistance with a requirement that we call “corruption-evidence”: there should be public evidence of the amount of coercion and other corruption that has taken place during a particular execution of the voting system. Depending on the significance of this amount, election authorities can decide whether to consider the election valid, leading to disincentivisation of coercion.
Barbara Kordy (Université du Luxembourg)
Date: Tuesday, May 15 – 14:00
Place: Amphi C
Title: Attack-Defense Tree Methodology for Security Assessment
Attack-defense trees are a novel methodology for graphical security modeling and assessment. They extend the well-established formalism of attack trees by allowing nodes that represent defensive measures to appear at any level of the tree. This extension enlarges the modeling capabilities of attack trees and makes the new formalism suitable for representing interactions between possible attacks and corresponding defensive measures. This talk will give an overview of the attack-defense tree methodology. After introducing the syntax as well as possible semantics for attack-defense trees, a number of interesting questions, including equivalent representations of security scenarios, quantitative analysis using attack-defense trees and the computational complexity of the considered model, will be discussed. Bio: Barbara Kordy is a postdoctoral researcher at the University of Luxembourg. She is currently leading the ATREES project, which focuses on modeling and evaluating vulnerability scenarios using the attack-defense tree methodology. Her research interests include formal methods for security assessment, privacy and anonymity issues, and the analysis of security protocols. She received her Ph.D. and master's degree in computer science from the University of Orléans in France. She also holds a master's degree in mathematics obtained at the University of Silesia in Poland.
Sergueï Lenglet (Université Paris-Diderot, PPS)
Date: Monday, April 23 – 14:00
Place: A008
Title: Bisimulations for delimited-control operators
We present first results on the definition of a behavioural theory for a lambda-calculus extended with the delimited-control operators shift and reset. We first define a notion of behavioural equivalence, which we then seek to characterize by means of bisimulations (applicative and normal-form bisimulations). We also study the relationship between this behavioural equivalence and another equivalence based on a CPS transformation.
Markulf Kohlweiss (Microsoft Research Cambridge)
Date: Thursday, April 5 – 14:00
Place: A006
Title: Malleability in Modern Cryptography
In recent years, malleable cryptographic primitives have advanced from being seen as a weakness allowing for attacks to being considered a potentially useful feature. Malleable primitives are cryptographic objects that allow for meaningful computations, most notably in the example of fully homomorphic encryption. Malleability is, however, a notion that is difficult to capture both in the hand-written and the formal security analysis of protocols. In my work, I look at malleability from both angles. On one hand, it is a source of worrying attacks that have to be mitigated, for example, in a verified implementation of the transport layer security (TLS) standard used for securing the Internet. On the other hand, malleability is a feature that helps to build efficient protocols, such as delegatable anonymous credentials and fast, resource-friendly proofs of computations for smart metering. We are building a zero-knowledge compiler for a high-level relational language (ZQL) that systematically optimizes and verifies the use of such cryptographic evidence. We recently discovered that malleability is also applicable to verifiable shuffles, an important building block for universally verifiable, multi-authority election schemes. We construct a publicly verifiable shuffle that for the first time uses one compact proof to prove the correctness of an entire multi-step shuffle. In our work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully ‘control’ the malleability of the proof system. In our shuffle application, controlled-malleable proofs allow each mixing authority to take as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability. Short Bio: I am a postdoc at Microsoft Research Cambridge in the Programming Principles and Tools group. I did my PhD at the COSIC (Computer Security and Industrial Cryptography) group at the K.U.Leuven, and my master thesis at IBM Research Zurich. My research focus is on privacy-enhancing protocols and formal verification of cryptographic protocols.
Paul Gibson (Telecom & Management SudParis)
Date: Friday, April 6 – 14:00
Place: A008
Title: Composing objects and services: a formal model driven development (MDD) approach
In this talk, we will present the problem of composing systems and reasoning about the correctness of such compositions. We will present semantics linking objects and services, and discuss the use of formal methods during the MDD of systems of services. Different formalisms will be used to highlight the important issues, but we will focus on our ongoing research using Event-B to model requirements, refined into designs, that are to be transformed into OO code (with concrete examples in Java).
David Delahaye (CNAM)
Date: Friday, March 23 – 14:00
Place: A008
Title: Tableaux modulo theories using superdeduction: an application to the verification of B proof rules with the automated theorem prover Zenon
We propose a method for developing tableaux modulo theories using the principles of superdeduction, in which the theory is used to enrich the deduction system with new deduction rules. This method is presented in the framework of the Zenon automated theorem prover, and we describe an example of its application to the set theory of the B method. This provides an alternative proof tool to that of Atelier B, which can be used in particular to verify B proof rules. We also present benchmarks in which this proof tool is able to automatically verify part of the rules coming from the database of added rules maintained by Siemens IC-MOL.
Martin Gagné (VERIMAG, Université Joseph Fourier)
Date: Tuesday, March 20 – 14:00
Place: A008
Title: Towards Automated Security Proofs of Block-Cipher Based Cryptographic Primitives
A block cipher algorithm (e.g. AES, Blowfish, DES, Serpent or Twofish) is a symmetric-key algorithm that takes a key and a fixed-size input block and produces a fixed-size output block. Block ciphers are among the most ubiquitous algorithms in symmetric cryptography, and can be used to construct encryption schemes, message authentication codes, collision-resistant hash functions, and many other primitives. As with any other cryptographic construction, we require that these primitives be proven secure, usually through a reduction of the security of the primitive to the security of the block cipher. Since block-cipher based primitives are usually constructed using a fairly small set of operations, we believe that Hoare logic is an ideal tool for automatically proving their security. We present a Hoare logic for proving the security of block cipher modes of operation. Our logic is fairly simple, requires only three invariants, and can be used to prove the security of all the common modes of operation. We also present a second Hoare logic for proving the security of message authentication codes. This logic is much more complicated than the first, and we discuss the different challenges posed by this primitive.
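As a point of reference (an illustration, not the talk's logic), the following Python sketch shows the structure of one common mode of operation, CBC encryption, where c_i = E_k(m_i XOR c_{i-1}) and c_0 is the IV; the toy block cipher standing in for AES or DES is an assumption made only to keep the sketch self-contained.

import os

BLOCK = 16  # block size in bytes

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for a real block cipher: XOR with the key and rotate.
    Insecure; used only to show the structure of the mode."""
    mixed = bytes(b ^ k for b, k in zip(block, key))
    return mixed[1:] + mixed[:1]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, message: bytes) -> bytes:
    assert len(message) % BLOCK == 0, "message must be padded to the block size"
    previous, out = iv, []
    for i in range(0, len(message), BLOCK):
        block = toy_block_cipher(key, xor(message[i:i + BLOCK], previous))
        out.append(block)
        previous = block
    return b"".join(out)

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
print(cbc_encrypt(key, iv, b"sixteen byte msg" * 2).hex())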
Emmanuel Jeandel (LIF, Université de Provence)
Date: Monday, March 19 – 14:00
Place: A008
Title: Uninitialized models of computation and dynamical systems
In this talk, we consider the usual models of computation (Turing machines, counter machines) as dynamical systems, corresponding to the application of their transition function to an arbitrary configuration. In particular, in this context one must observe their behaviour starting from any configuration, rather than from a given initial configuration. The problem then becomes completely different. For instance, the proof of the undecidability of the existence of a configuration on which a Turing machine does not halt (Hooper, 1966) is much more delicate than the undecidability of the halting of a Turing machine started from a given configuration (Turing, 1937), and this phenomenon recurs for most models of computation. After a general presentation of the problem, we will try in this talk to illustrate the methods that make it possible to solve or circumvent the problem in the setting of the model of computation given by tilings with Wang tiles, which lends itself very well to a dynamical treatment.
Christophe Calvès (INRIA Lille-Nord Europe)
Date: Tuesday, March 13 – 14:00
Place: A008
Title: Specifying Kermeta through Rewriting Logic
Kermeta is a Domain-Specific Language designed as a kernel for metamodel engineering. It unifies metamodeling, constraints, semantics and transformations into a coherent and statically typed language. In a nutshell, it has an object-oriented base similar to Java, but with support for metamodeling aspects such as attributes, associations, multiplicity, … and a “design by contract” approach in the style of Eiffel. Kermeta enables users to give a semantics to their metamodels. Unfortunately, even though Kermeta is a mature piece of software, it has not been (formally) specified yet. This lack prevents any (formal) verification of metamodel semantics. In this talk we will present a specification of Kermeta's semantics through the formal definition of a compiler from Kermeta to an object-oriented stack-based abstract machine. Both the compiler and the abstract machine are defined in the specification framework K, based on rewriting logic. We will show the problems raised by Kermeta's particularities (such as attribute/reference handling, design by contract, multiple inheritance, genericity, …), how to formalize them and how to compile them correctly.
Marian Baroni (University ‘Dunarea de Jos’ of Galati, Romania)
Date: Thursday, December 8 – 16:00
Place: A006
Title: A Bishop-style constructive theory of posets
Partially ordered sets are examined from the point of view of Bishop’s constructive mathematics, which can be viewed as the constructive core of mathematics and whose theorems can be translated into many formal systems of computable mathematics. The features of Bishop’s constructivism are illustrated by a simple example, the least-upper-bound principle. Unlike the classical case, this statement does not hold constructively. However, the order completeness of the real number set can be expressed constructively by an equivalent condition for the existence of supremum, a condition of order locatedness which is vacuously true in the classical setting. A generalization of this condition leads to a reasonable definition of order completeness for an arbitrary partially ordered set.
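For concreteness, here is a hedged LaTeX rendering of the constructive least-upper-bound principle alluded to above (the exact formulation follows the Bishop–Bridges tradition and is an assumption of this note): a nonempty set S of reals that is bounded above has a supremum precisely when it is upper order located.

% Constructive least-upper-bound principle (sketch, Bishop-Bridges style):
\[
\sup S \text{ exists} \iff
\forall x, y \in \mathbb{R}\; \bigl( x < y \;\Rightarrow\;
   (\exists s \in S)\, s > x \;\lor\; (\forall s \in S)\, s \le y \bigr).
\]
% Classically the right-hand side is vacuously true; constructively it is the
% extra "order locatedness" hypothesis that makes the supremum computable.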
Antoine Taveneaux (Université Paris Diderot, LIAFA)
Date: Tuesday, December 6 – 14:00
Place: A006
Title: Computability and effective mathematics
In many mathematical theorems, the existential quantifier is not “effective”, in the sense that the object whose existence is proved is not computable. Computability theory is mainly concerned with comparing sets of integers from the point of view of their mutual computability. We will see that one can also compare the effectiveness of theorems, and in some sense do “computability on theorems”. This talk will be given in French and assumes almost no background in computability theory.
Olivier Pereira (UCL Crypto Group, Louvain-la-Neuve)
Date: Monday, November 28 – 10:00
Place: A008
Title: Running Mixnet-Based Elections with Helios
The Helios voting system is an open-audit web-based voting system that has been used by various institutions in real-stake elections during the last few years. While targeting the simplicity of the election workflow, the homomorphic tallying process used in Helios limits its suitability for many elections (large number of candidates, specific ballot filling rules…). We present a variant of Helios that allows an efficient mixnet-based tallying procedure, and document the various choices we made in terms of election workflow and algorithm selection. In particular, we propose a modified version of the TDH2 scheme of Shoup and Gennaro that we found particularly suitable for the encryption of the ballots. Our Helios variant has been tested during two multi-thousand voter elections. The lessons learned from the first of these elections motivated some changes to our procedure, which were successfully tried out during the second election. This is joint work with Philippe Bulens and Damien Giry.
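The following toy Python sketch (illustrative only; the tiny group parameters are assumptions, and the zero-knowledge proofs of correct shuffling are omitted) shows the core mixnet operation such a variant relies on: each mix server re-randomizes and permutes ElGamal ciphertexts, so the tally can decrypt arbitrary ballots rather than only homomorphic sums.

import random

P = 23  # toy prime; real systems use large, standardized groups
G = 5   # generator of Z_P*

def keygen():
    x = random.randrange(1, P - 1)                     # secret key
    return x, pow(G, x, P)                             # (sk, pk = g^x)

def encrypt(pk, m):
    r = random.randrange(1, P - 1)
    return pow(G, r, P), (m * pow(pk, r, P)) % P       # (g^r, m * pk^r)

def rerandomize(pk, ct):
    c1, c2 = ct
    s = random.randrange(1, P - 1)
    return (c1 * pow(G, s, P)) % P, (c2 * pow(pk, s, P)) % P

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, P - 1 - sk, P)) % P           # c2 / c1^sk

def mix(pk, ciphertexts):
    """One mix server: re-randomize every ballot, then shuffle."""
    mixed = [rerandomize(pk, ct) for ct in ciphertexts]
    random.shuffle(mixed)
    return mixed

sk, pk = keygen()
ballots = [2, 3, 4]                                    # toy plaintext ballots
board = mix(pk, mix(pk, [encrypt(pk, b) for b in ballots]))
print(sorted(decrypt(sk, ct) for ct in board))         # [2, 3, 4]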
Roberto Giacobazzi (Univ. Verona, Italy)
Date: Tuesday, November 15 – 10:30
Place: B011
Title: Obfuscation by Partial Evaluation of Distorted Interpreters
Joint work with Neil D. Jones and Isabella Mastroeni. How does one construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P’ from any program P given as clear source code. Start with a (program-executing) interpreter Interp for the language in which P is written. Then “distort” Interp so that it is still correct, but its specialization P’ with respect to P is transformed code that is equivalent to the original program, yet harder to understand or analyze. Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P’ on which (abstract) interpretation is incomplete. Interpreter distortion can be done by making sufficiently many interpreter operations residual in the specialization process to defeat an attacker trying to extract meaningful information from the transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
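As a small illustration of one of the listed transformations (an assumption of this note, not the authors' tool), opaque predicate insertion plants a test that is always true for every input but that a static analyzer may fail to resolve, so the dead branch inflates the apparent control flow:

def original(x: int) -> int:
    return x * 2 + 1

def obfuscated(x: int) -> int:
    # x*x + x = x*(x+1) is a product of consecutive integers, hence always even:
    # the predicate is opaquely true, and the else-branch is never executed.
    if (x * x + x) % 2 == 0:
        return x * 2 + 1
    return x - 42          # dead code, present only to confuse analysis

assert all(original(x) == obfuscated(x) for x in range(-1000, 1000))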
Vincent Cheval (LSV, Cachan)
Place: B011
Title: A decision procedure for trace equivalence
We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability. In this paper, we give a calculus that is close to the applied pi calculus and that allows one to capture most existing protocols that rely on classical cryptographic primitives. First, we propose a symbolic semantics for our calculus relying on constraint systems to represent infinite sets of possible traces, and we reduce the decidability of trace equivalence to deciding a notion of symbolic equivalence between sets of constraint systems. Second, we develop an algorithm allowing us to decide whether two sets of constraint systems are in symbolic equivalence or not. Altogether, this yields the first decidability result of trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions).
Alexander Shen (CNRS, LIRMM, Montpellier)
Date: Tuesday, October 18 – 14:00
Place: C005 (salle du conseil)
Title: Kolmogorov complexity and “no information” notion
There are many ways to formalize the intuitive idea of “a message that has no information” or “a message that has full information” about something. One can use the logic of knowledge or Shannon's information theory, or consider protocols (deterministic, non-deterministic, probabilistic, etc.). We discuss these approaches and then switch to the algorithmic information theory approach and explain A. Muchnik's result about a (kind of) cryptographic framework in algorithmic information theory.
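In algorithmic information theory the slogan can be made precise, up to additive logarithmic terms, roughly as follows (a hedged note, not the talk's exact statement):

% "x has (essentially) no information about y", read through Kolmogorov complexity:
% the algorithmic mutual information is negligible.
\[
I(x : y) \;=\; K(y) - K(y \mid x) \;\approx\; 0,
\qquad\text{equivalently}\qquad K(y \mid x) \approx K(y).
\]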
Daniel Leivant (Indiana University)
Date: Friday, June 10 – 14:00
Place: C103
Title: Dynamic logic and its parents
Modal deductive formalisms about imperative programs, such as dynamic logic, can be seen as syntactic sugar for more explicit forms of reasoning, such as first-order theories and second-order logic. This view suggests notions of completeness for dynamic logic, which we argue are more informative and appropriate than the traditional notion of relative completeness.
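A small worked instance of this reading (assumed here, not taken from the talk): for a single assignment, the dynamic-logic modality unfolds to an ordinary first-order fact by substitution, i.e. a weakest precondition.

% Dynamic-logic formula and its explicit first-order reading:
% "after the assignment x := x + 1, the variable x is positive".
\[
[x := x + 1]\,(x > 0) \;\equiv\; (x > 0)[x+1/x] \;\equiv\; x + 1 > 0 .
\]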
Frederic Prost (LIG, Grenoble)
Date: Wednesday, May 11 – 15:30
Place: B013
Title: Dynamic security policies
The non-interference property, which consists in proving that two parts of a program do not interact with each other, underlies many program analyses: parallelisation, program slicing, dead-code elimination, etc. In the field of confidentiality, the goal is to show that there is no information flow from the “private” part to the “public” one. This property is, however, too strict to account for real applications: for instance, when checking a password there is formally an interaction between the private domain (the value of the password) and the public domain (access is granted or not depending on the code entered). Such information leaks are nevertheless considered acceptable. We introduce a notion of confidentiality policy in the form of associated rewrite rules (one could see it as a finer form of typing) to specify that such leaks are “permitted”, and we check that a program is correct with respect to a security policy. Similarly, since a bank card is blocked after three unsuccessful attempts, we show how, by adding actions to the rules defining the security policy, the policy can be modified during the computation, and how to check that the program remains valid with respect to its security policy.
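A minimal Python sketch of the password example from the abstract (illustrative; the rewrite-rule policy itself is not formalized here): strict non-interference rejects this function because the public result depends on the secret, while a declassification policy can state that exactly the one-bit outcome of the comparison is a permitted flow.

import hmac

SECRET_PASSWORD = "s3cret"          # high (private) data

def check_password(guess: str) -> bool:
    # The only information released to the public level is one boolean;
    # returning the password itself would violate the policy.
    return hmac.compare_digest(guess, SECRET_PASSWORD)

print(check_password("guess"), check_password("s3cret"))   # False True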
Vasileios Koutavas (Trinity College, Dublin)
Date: Tuesday, May 10 – 10:00
Place: A008
Title: An Introduction to Program Equivalence
Contextual equivalence is a relation between program terms which guarantees that related terms can replace each other inside any arbitrary system without changing the behaviour of the system. This relation reveals fundamental symmetries of program terms that abstract away from concrete syntax and illuminate the semantics of programming languages. Its implications span from programming language design to software engineering, to system security, to program verification. To understand contextual equivalence, and to develop reasoning principles for it, we need to develop a theory that can express the basic interactions between a program and its environment, and to express contextual equivalence within that theory. In this talk we will review the theories available today and then concentrate on those based on bisimulation, which we will examine for a variety of functional, imperative, and nondeterministic languages.
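The standard definition, with termination as the observable (a hedged note; the precise observable and class of contexts vary from language to language):

% Contextual (observational) equivalence, with termination as the observable:
\[
M \cong N \quad\iff\quad
\forall C.\;\; C[M]\Downarrow \;\Longleftrightarrow\; C[N]\Downarrow
\]
% e.g. in a pure call-by-value language, \(\lambda x.\, x + x \;\cong\; \lambda x.\, 2 \times x\).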
Marie de Roquemaurel (IRIT, Université de Toulouse III)
Date: Tuesday, April 5 – 14:00
Place: A006
Title: Assisting model design with constraints
Embedded computer systems are becoming ubiquitous in many domains, and more and more industrial actors seek to model their real-world problems, to check the consistency of their models, and to use efficient tools to solve them. In this work, we consider problems modelled with the tools currently used by the model-driven engineering community, such as the EMF framework (Eclipse Modeling Framework) to represent the embedded system, or the OCL constraint language (Object Constraint Language) to express business properties. To check these properties efficiently, the chosen approach consists in transforming the problem into a constraint satisfaction problem and using efficient Artificial Intelligence techniques to solve it. The first experimental results are promising. In this talk, I will present this approach and we will see that it has the advantage of supporting constraint validation and the completion of partial models, while also being able to take a minimisation criterion into account. Keywords: System Design, Model-Driven Engineering, Model Transformation, Synthesis, Constraints
Gennaro Parlato (CNRS, LIAFA, Paris)
Date: Tuesday, February 15 – 14:00
Place: A008
Title: Decidable Logics Combining Heap Structures and Data
We define a new logic, STRAND, that allows reasoning with heap-manipulating programs using deductive verification and SMT solvers. STRAND logic (“STRucture ANd Data” logic) formulas express constraints involving heap structures and the data they contain; they are defined over a class of pointer-structures R defined using MSO-defined relations over trees, and are of the form ∃x∀y.φ(x, y), where φ is a monadic second-order logic (MSO) formula with additional quantification that combines structural constraints as well as data-constraints, but where the data-constraints are only allowed to refer to x and y. The salient aspects of the logic are: (a) the logic is powerful, allowing existential and universal quantification over the nodes, and complex combinations of data and structural constraints; (b) checking Hoare-triples for linear blocks of statements with pre-conditions and post-conditions expressed as Boolean combinations of the existential and universal STRAND formulas reduces to satisfiability of a STRAND formula; (c) there are powerful decidable fragments of STRAND, one semantically defined and one syntactically defined, where the decision procedure works by combining the theory of MSO over trees and the quantifier-free theory of the underlying data-logic. We demonstrate the effectiveness and practicality of the logic by checking verification conditions generated in proving properties of several heap-manipulating programs, using a tool that combines an MSO decision procedure over trees (MONA) with an SMT solver for integer constraints (Z3).
Marco Gaboardi (Università di Bologna)
Date: Tuesday, February 8 – 14:00
Place: A008
Title: Linear Dependent Types for Certified Resource Consumption
I present the ideas underlying the design of an interactive programming framework useful to ensure the reliability and correctness of programs with respect to their resource consumption. The key ingredient of this framework is a type system, dlPCF, mixing ideas from linear logic and dependent types. I introduce dlPCF and I show that it is sound and relatively complete. Completeness holds in a strong sense: dlPCF is not only able to precisely capture the functional behaviour of programs (i.e. how the output relates to the input) but also some of their intensional properties, namely the complexity of their evaluation. Additionally, dlPCF is parametrized on the underlying language of index terms, which can be tuned so as to sacrifice completeness for tractability.
Steve Kremer (INRIA, LSV, Cachan)
Date: Tuesday, January 25 – 14:00
Place: A008
Title: Formal analysis of security protocols: the case of electronic voting
Security protocols are distributed programs which aim at establishing security guarantees, such as confidentiality, authentication or anonymity, while being executed over an untrusted network. Nowadays, security protocols are present in many aspects of our everyday life: electronic commerce, mobile phones, e-banking, etc. Symbolic methods, which often come with automated tool support, have proven extremely useful for finding structural flaws in such protocols or proving their absence. In this talk we report on our efforts carried out over the last years to apply such techniques to electronic voting protocols. The specificities of electronic voting have raised several challenges: these protocols make use of esoteric cryptographic primitives and need to guarantee complex security properties, such as receipt-freeness or verifiability. We have modelled and analyzed such protocols and their properties in the applied pi calculus, a formal language for expressing security protocols. Formally defining the protocols and properties allowed us to highlight many (hidden) assumptions and turned out to be a non-trivial task in itself. Regarding automated analysis, this line of work raised a number of challenges in the formal analysis of security protocols, some of which are still the subject of active research.
Florian Horn (CNRS, LIAFA , Paris)
Date: Tuesday, January 18 – 14:00
Place: A008
Title: Solving Simple Stochastic Tail Games
Stochastic games are a natural model for open reactive processes: one player represents the controller and his opponent represents a hostile environment. The evolution of the system depends on the decisions of the players, supplemented by random transitions. There are two main algorithmic problems on such games: computing the values (quantitative analysis) and deciding whether a player can win with probability 1 (qualitative analysis). In this paper we reduce the quantitative analysis to the qualitative analysis: we provide an algorithm for computing values which uses qualitative analysis as a sub-procedure. The correctness proof of this algorithm reveals several nice properties of perfect-information stochastic tail games, in particular the existence of optimal strategies.