I am a researcher in computer science, currently working as a postdoctoral researcher at the Max Planck Institute for Software Systems in Saarbrücken, in Derek Dreyer's group. Although my interests are diverse, my main area of expertise is the application of semantic techniques to programming languages and proof assistants. I also make a point of backing my theoretical work with a substantial amount of implementation, for which I have developed dedicated tools and libraries in OCaml.

Between September 2017 and December 2018, I was a postdoctoral researcher at Inria in the Deducteam project, hosted by the LSV at ENS Cachan (now ENS Paris-Saclay). During that time, I proposed a new implementation of the Dedukti logical framework called Lambdapi, which makes it possible to develop mathematical proofs directly in the system, whereas the original Dedukti was only usable as the target of translations.

Between October 2014 and August 2017, I was a PhD student at the LAMA (LAboratoire de MAthématiques), in Chambéry (in the French Alps). During that time, I mainly worked with Christophe Raffalli and Pierre Hyvernat, although my official advisor was Karim Nour. In my thesis work, I designed the theoretical foundation of (and implemented) the PML₂ language, which mixes programming and program certification in a uniform (ML-like) setting.

Rodolphe Lepigre

Research

Since I joined Derek Dreyer's group at MPI-SWS, I have been spending a lot of time working with (and also on) Iris. Iris is a higher-order concurrent separation logic, which roughly means that it is a modern form of Hoare logic. In particular, it can be used to mechanically verify tricky concurrent programs inside the Coq proof assistant. One of my main projects so far (in collaboration with Ralf Jung, Gaurav Parthasarathy, Marianna Rapoport, Amin Timany, Derek Dreyer and Bart Jacobs) has been to integrate Abadi and Lamport's prophecy variables into the Hoare logic framework [JLPRTDJ2020]. Prophecy variables turn out to be a very useful mechanism when proving strong (logically atomic) specifications for concurrent data structures like RDCSS or the Herlihy-Wing queue.
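As a tiny illustration of the kind of reasoning involved (a hypothetical OCaml rendition, loosely adapted from the lazy-coin example of [JLPRTDJ2020]), consider a coin whose value is only drawn at the first read. Proving that it behaves like a coin flipped eagerly at creation time requires knowing, at creation time, the outcome of a random choice that only happens in the future, which is exactly what a prophecy variable provides.

  (* A "lazy" coin: the boolean is only drawn at the first read. *)
  let new_coin () : bool option ref = ref None

  let read_coin (c : bool option ref) : bool =
    match !c with
    | Some b -> b
    | None   -> let b = Random.bool () in c := Some b; b

  (* An "eager" coin draws its value at creation time; showing that the lazy
     coin refines this one is where a prophecy variable comes in. *)
  let new_eager_coin () : bool ref = ref (Random.bool ())
  let read_eager_coin (c : bool ref) : bool = !c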

On a more personal side, my general research project revolves around the design and implementation of PML₂ [PML2], which is a programming language with support for program certification. The main idea is to extend an ML-like language, similar to OCaml or SML, with the means of specifying and proving equational properties of its own programs. We thus combine the flexibility of a full-fledged programming language with the great specification power of a proof assistant. Although the development of PML₂ is one of my leading goals, it is first and foremost an excuse for attacking difficult theoretical and practical questions. Of course, the answers to such questions go beyond the scope of PML₂, although I often present them in its light. In this sense, the implementation of the language can be seen as an experimentation platform for driving theoretical investigations. I strongly believe that mixing abstract theoretical questions with pragmatic implementation aspects benefits both sides. For example, the type system given in [Lepigre2016] could not be implemented in a natural way due to the fact that PML₂ is a so-called Curry-style language. As a result, we developed a new framework for dealing with such languages, including new techniques using subtyping, choice operators and circular proofs [LepRaf2019].

Since my previous postdoctoral position at Inria, I have been involved in an effort to transform the Dedukti logical framework into a full-fledged, interactive proof system. This new line of work around Dedukti was envisioned by Frédéric Blanqui, and it is now based on the new implementation of the λΠ-calculus that I initiated [Lambdapi]. This new implementation has several major advantages over the previous version of Dedukti. First and foremost, it provides a primitive notion of metavariable, which is used for representing proof goals as well as omitted elements (e.g., implicit arguments). Moreover, the implementation relies on the Bindlib library [Bindlib], which allows for high-level manipulations of structures with bound variables without sacrificing efficiency.

Classical realizability and observational equivalence

To prove the correctness of the PML₂ system (logical consistency and runtime safety), I designed a theoretical framework based on Krivine's classical realizability [Lepigre2016]. This semantical model accounts for the specificities of the system, and in particular the notion of program equivalence that is used for specifying computational behaviours. I thus defined a relation of observational equivalence over terms, which can be naturally expressed in the classical realizability setting. Indeed, quantifying over all the evaluation contexts simply amounts to quantifying over all the stacks of a Krivine abstract machine. Another specificity of the model lies in its call-by-value nature, which yields an interpretation with three layers, as opposed to two in the usual (call-by-name) presentation of Krivine's classical realizability (this was already noticed by Guillaume Munch-Maccagnoni). Indeed, a type A is interpreted by a set of (fully evaluated) values ⟦A⟧, a set of stacks ⟦A⟧⊥ (evaluation contexts), and a set of terms ⟦A⟧⊥⊥, linked together by an orthogonality relation. In some sense, this means that ⟦A⟧⊥⊥ is the completion of ⟦A⟧ with terms computing elements of ⟦A⟧. It is essential for these sets to be closed under observational equivalence.
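Schematically (a sketch of the standard construction, in which the pole ⊥⊥ is a fixed set of accepted processes of the Krivine abstract machine, and v ⋆ π denotes the process formed of a value and a stack), the three layers are related as follows:

  ⟦A⟧⊥  = { π | ∀v ∈ ⟦A⟧, v ⋆ π ∈ ⊥⊥ }
  ⟦A⟧⊥⊥ = { t | ∀π ∈ ⟦A⟧⊥, t ⋆ π ∈ ⊥⊥ }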

Dependent functions, effects and value restriction

In PML₂, program properties are specified and proved by manipulating terms as the first-order objects of the type system. In particular, this gives a way of expressing properties that should hold for all terms. For example, we can write ∀n, ∀m, n+m ≡ m+n to express the commutativity of an addition function. However, the first-order quantification that is used here ranges over all possible values, without any type consideration, whereas functions like addition are usually not defined on all values, but only on a restricted domain such as the natural numbers, and may crash when fed anything else. Hence, we often need to rely on a stronger, typed form of quantification, with which we can write ∀n∈ℕ, ∀m∈ℕ, n+m ≡ m+n. Such types correspond to a form of dependent function. For instance, the previous example is inhabited by functions taking two numbers x and y, and returning a proof of x+y ≡ y+x. In PML₂, working with such dependent function types is delicate because of the presence of effects. Indeed, as for polymorphism in ML, dependent functions require some restriction to preserve soundness. The usual value restriction used in ML also applies here, but it is not satisfactory. Indeed, it restricts the application of dependent functions to values, which breaks the compositionality of the system.
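To make the intended reading concrete, here is a rough OCaml analogue (a hypothetical illustration, not actual PML₂ syntax) of the addition function concerned by the property above. In PML₂, a proof of the typed statement would be a terminating program inhabiting the corresponding dependent function type, whereas plain OCaml can only test individual instances.

  (* Unary natural numbers and their addition; the function is only meaningful
     on values of type nat, hence the typed quantification ∀n∈ℕ, ∀m∈ℕ. *)
  type nat = Zero | Succ of nat

  let rec add n m =
    match n with
    | Zero   -> m
    | Succ p -> Succ (add p m)

  (* In PML₂ the statement n+m ≡ m+n is itself a type, inhabited by a proof;
     here we can merely check one instance of it. *)
  let _ = assert (add (Succ Zero) (Succ (Succ Zero))
                  = add (Succ (Succ Zero)) (Succ Zero))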

To solve this issue, I proposed to relax the value restriction using the idea that a term that is (observationally) equivalent to a value can be considered to be a value [Lepigre2016]. Although this so-called semantical value restriction is simple, it is extremely hard to justify in the model. Indeed, it requires an essential property relating the different levels of interpretation of types in a novel way. As mentioned previously, a type A is interpreted both as a set of values ⟦A⟧ and as a set of terms ⟦A⟧⊥⊥ defined as a completion of ⟦A⟧, which implies ⟦A⟧ ⊆ ⟦A⟧⊥⊥. The property that is required for justifying the semantical value restriction is the following: every value of ⟦A⟧⊥⊥ should already be in ⟦A⟧. In other words, the completion operation should not introduce any new values. To obtain this property, I extended the programming language with a new instruction, which provides new tests for observational equivalence (see [Lepigre2016] and [Lepigre2017PhD]).

Type-checking and subtyping in Curry-style languages

Type checking (verifying that a given term inhabits a given type) and type inference (finding a type that is inhabited by a given term) tend to be undecidable in Curry-style languages like System F or PML₂. As a consequence, these systems are sometimes considered impractical, although practicality and decidability are two different problems. The main issue with Curry-style languages is that their type systems are generally not syntax-directed, meaning that they cannot be easily implemented. In particular, there is no canonical way of deciding which typing rule should be applied first when attempting to prove a typing judgment. To solve this problem, we designed (with Christophe Raffalli) a framework based on subtyping [LepRaf2019]. The main, innovative idea is to use a ternary relation t ∈ A ⊆ B instead of the usual binary relation A ⊆ B. We interpret the former as the implication “if the term t has type A, then it also has type B”, while the latter is interpreted as the inclusion “every element of type A is an element of type B”. In the obtained system, only one typing rule applies for every term constructor, and only one subtyping rule applies for every pair of types (up to commutation). In particular, the connectives that do not have algorithmic content (those that are not reflected in the syntax of the terms) are handled using subtyping exclusively. Such connectives include the quantifiers, but also the equality types of PML₂ and the least and greatest fixpoint constructors used by inductive and coinductive types.
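As an illustration of what being syntax-directed buys us, here is a minimal, self-contained OCaml sketch (hypothetical code, not the actual SubML/PML₂ implementation, and omitting most of the real rules) of a checker for the ternary relation. The dispatch is entirely determined by the pair of head constructors, and the term t is threaded through so that quantifiers can be handled by subtyping alone.

  (* A toy type syntax: naturals, arrows and universal quantification. *)
  type typ =
    | Nat
    | Arrow  of typ * typ
    | Forall of string * typ
    | TVar   of string

  (* Substitution; capture is not an issue since the symbols we substitute are
     closed (in the spirit of the choice operators described below). *)
  let rec subst x u t =
    match t with
    | TVar y when y = x         -> u
    | Arrow (a, b)              -> Arrow (subst x u a, subst x u b)
    | Forall (y, a) when y <> x -> Forall (y, subst x u a)
    | _                         -> t

  (* A closed symbolic witness standing for a “worst possible” instance of a
     type variable (a stand-in for the choice operators of [LepRaf2019]). *)
  let witness x = TVar ("#" ^ x)

  (* [subtype t a b] (incompletely) decides the ternary relation “t ∈ a ⊆ b”;
     at most one case applies for each pair of head constructors. *)
  let rec subtype t a b =
    match (a, b) with
    | _ when a = b                     -> true
    | (_, Forall (x, b'))              -> subtype t a (subst x (witness x) b')
    | (Arrow (a1, b1), Arrow (a2, b2)) -> subtype t a2 a1 && subtype t b1 b2
    | _                                -> false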

Choice operators for a closed semantics

Our work on Curry-style languages [LepRaf2019] not only involves a new notion of subtyping, but also a new way of dealing with variables based on choice operators, inspired by Hilbert's epsilon operator. With this presentation, bound variables are systematically substituted by closed symbols playing the role of witnesses for semantical properties. As a consequence, terms and types remain closed throughout the typing and subtyping rules, and the usual typing contexts are no longer required. This presentation has several advantages, the first of which is a simplification of the semantics. Indeed, free variables are usually handled (in realizability models) using so-called valuations, assigning a semantic value to each of them. In our presentation, such maps are not required since all the necessary information is carried by the symbolic witnesses that are substituted for free variables. As an indirect consequence of this technique, structural rules such as weakening become completely transparent, and the implementation only requires first-order unification. Finally, the elimination of contexts facilitates the construction of circular proofs (see below).
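Schematically (and glossing over many details of [LepRaf2019]), the typing rule for a λ-abstraction in this style replaces the bound variable by a closed witness instead of extending a context: to establish λx.t : A ⇒ B, it suffices to establish t[x ≔ ε_{x∈A}(t ∉ B)] : B, where the choice operator denotes a value of type A on which t fails to have type B, if any such value exists (and an arbitrary value of A otherwise).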

Circular proofs and termination checking

As mentioned earlier, the framework based on subtyping developed in [LepRaf2019] is compatible with inductive and coinductive types. They are added to the syntax of types in the form of least and greatest fixpoint operators, which are annotated with ordinals to form sized types. To handle these constructors in a sound way, we introduce a notion of circular proof with a related well-foundedness criterion. In particular, the subtyping rules that are given for the fixpoints induce a form of infinite unfolding. This is due to the fact that we work with symbolic ordinals that are not given a concrete value, so that it is impossible to know when the zero ordinal will be reached in a decreasing sequence. However, assuming that the proof is well-founded, we know that it will indeed be reached after finitely many unfolding steps. To keep the proof representation finite, we introduce circularity in our proofs, and check that they are well-formed using the size-change principle of Lee, Jones and Ben-Amram. The notion of circular proof that we consider is in fact very general, and in our setting it can also be applied to typing rules. This allows us to type recursive programs in a very simple way, while proving their termination. The normalisation proof is obtained using standard reducibility candidates (or realizability) techniques. Indeed, as the structure of our circular proofs is well-founded, it is still possible to reason by (well-founded) induction on the structure of our typing (or subtyping) derivations.
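For a rough idea of what the well-foundedness check involves, here is a toy, self-contained OCaml rendition of the size-change principle (hypothetical code, not the actual SubML/PML₂ termination checker). Calls between nodes of the (circular) proof are summarised by size-change graphs, and the criterion requires that, in the closure of these graphs under composition, every idempotent graph from a node to itself contains a strictly decreasing arc.

  (* An arc (i, strict, j) means that argument j of the callee is strictly
     smaller (strict = true) or not larger (strict = false) than argument i of
     the caller.  Arcs are kept as sorted lists so that structural equality on
     graphs is meaningful. *)
  type scg = { src : string; dst : string; arcs : (int * bool * int) list }

  let norm g = { g with arcs = List.sort_uniq compare g.arcs }

  (* Composition of two graphs (only used when [g1.dst = g2.src]). *)
  let compose g1 g2 =
    let arcs =
      List.concat_map (fun (i, s1, j) ->
        List.filter_map (fun (j', s2, k) ->
          if j = j' then Some (i, s1 || s2, k) else None) g2.arcs) g1.arcs
    in
    norm { src = g1.src; dst = g2.dst; arcs }

  (* Closure of a set of graphs under composition. *)
  let rec closure gs =
    let news =
      List.concat_map (fun g1 ->
        List.filter_map (fun g2 ->
          if g1.dst = g2.src then Some (compose g1 g2) else None) gs) gs
    in
    let gs' = List.sort_uniq compare (gs @ news) in
    if gs' = gs then gs else closure gs'

  (* Size-change termination criterion. *)
  let terminates gs =
    let gs = List.sort_uniq compare (List.map norm gs) in
    List.for_all (fun g ->
      g.src <> g.dst || compose g g <> g
      || List.exists (fun (i, s, j) -> s && i = j) g.arcs) (closure gs)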

Publications (also on dblp)

VIP: Verifying Real-World C Idioms with Integer-Pointer Casts [LSMKDS2022]
Rodolphe Lepigre, Michael Sammler, Kayvan Memarian, Robbert Krebbers, Derek Dreyer, Peter Sewell
POPL 2022 (accepted for publication)

Abstract. Systems code often requires fine-grained control over memory layout and pointers, expressed using low-level (e.g., bitwise) operations on pointer values. Since these operations go beyond what basic pointer arithmetic in C allows, they are performed with the help of integer-pointer casts. Prior work has explored increasingly realistic memory object models for C that account for the desired semantics of integer-pointer casts while also being sound w.r.t. compiler optimisations, culminating in PNVI, the preferred memory object model in ongoing discussions within the ISO WG14 C standards committee. However, its complexity makes it an unappealing target for verification, and no tools currently exist to verify C programs under PNVI.
In this paper, we introduce VIP, a new memory object model aimed at supporting C verification. VIP sidesteps the complexities of PNVI with a simple but effective idea: a new construct that lets programmers express the intended provenances of integer-pointer casts explicitly. At the same time, we prove VIP compatible with PNVI, thus enabling verification on top of VIP to benefit from PNVI's validation with respect to practice. In particular, we build a verification tool, RefinedC-VIP, for verifying programs under VIP semantics. As the name suggests, RefinedC-VIP extends the recently developed RefinedC tool, which is automated yet also produces foundational proofs in Coq. We evaluate RefinedC-VIP on a range of systems-code idioms, and validate VIP's expressiveness via an implementation in the Cerberus C semantics.

Abstract PDF RefinedC (git repo) Artifact and appendix Video (POPL talk)

RefinedC: Automating the Foundational Verification of C Code with Refined Ownership Types [SLKMDG2021]
Michael Sammler, Rodolphe Lepigre, Robbert Krebbers, Kayvan Memarian, Derek Dreyer, Deepak Garg
PLDI 2021
Recipient of a PLDI 2021 Distinguished Paper Award.
Recipient of the PLDI 2021 Distinguished Artifact Award.

Abstract. Given the central role that C continues to play in systems software, and the difficulty of writing safe and correct C code, it remains a grand challenge to develop effective formal methods for verifying C programs. In this paper, we propose a new approach to this problem: a type system we call RefinedC, which combines ownership types (for modular reasoning about shared state and concurrency) with refinement types (for encoding precise invariants on C data types and Hoare-style specifications for C functions).
RefinedC is both automated (requiring minimal user intervention) and foundational (producing a proof of program correctness in Coq), while at the same time handling a range of low-level programming idioms such as pointer arithmetic. In particular, following the approach of RustBelt, the soundness of the RefinedC type system is justified semantically by interpretation into the Coq-based Iris framework for higher-order concurrent separation logic. However, the typing rules of RefinedC are also designed to be encodable in a new “separation logic programming” language we call Lithium. By restricting to a carefully chosen (yet still expressive) fragment of separation logic, Lithium supports predictable, automatic, goal-directed proof search without backtracking. We demonstrate the effectiveness of RefinedC on a range of representative examples of C code.

Abstract PDF On DBLP Bibtex Webpage RefinedC (git repo) Appendix Artifact

The Future is Ours: Prophecy Variables in Separation Logic [JLPRTDJ2020]
Ralf Jung, Rodolphe Lepigre, Gaurav Parthasarathy, Marianna Rapoport, Amin Timany, Derek Dreyer, Bart Jacobs
PACMPL 4(POPL): 45:1-45:32 (2020)

Abstract. Early in the development of Hoare logic, Owicki and Gries introduced auxiliary variables as a way of encoding information about the history of a program's execution that is useful for verifying its correctness. Over a decade later, Abadi and Lamport observed that it is sometimes also necessary to know in advance what a program will do in the future. To address this need, they proposed prophecy variables, originally as a proof technique for refinement mappings between state machines. However, despite the fact that prophecy variables are a clearly useful reasoning mechanism, there is (surprisingly) almost no work that attempts to integrate them into Hoare logic. In this paper, we present the first account of prophecy variables in a Hoare-style program logic that is flexible enough to verify logical atomicity (a relative of linearizability) for classic examples from the concurrency literature like RDCSS and the Herlihy-Wing queue. Our account is formalized in the Iris framework for separation logic in Coq. It makes essential use of ownership to encode the exclusive right to resolve a prophecy, which in turn enables us to enforce soundness of prophecies with a very simple set of proof rules.

Abstract PDF On DBLP Bibtex Webpage Artifact Video (POPL talk)

Unboxing Mutually Recursive Type Definitions in OCaml [CLS2019]
Simon Colin, Rodolphe Lepigre, Gabriel Scherer
Proceedings of JFLA, Les Rousses, France, 30th January to 2nd February 2019

Abstract. In modern OCaml, single-argument datatype declarations (variants with a single constructor, records with a single field) can sometimes be “unboxed”. This means that their memory representation is the same as their single argument (omitting the variant or record constructor and an indirection), thus achieving better time and memory efficiency. However, in the case of generalized/guarded algebraic datatypes (GADTs), unboxing is not always possible due to a subtle fact about the runtime representation of OCaml values. The current correctness check is incomplete, rejecting many valid definitions, in particular those involving mutually-recursive datatype declarations. In this paper, we explain the notion of separability as a semantics for the unboxing criterion, and propose a set of inference rules to check separability. From these inference rules, we derive a new implementation of the unboxing check that properly supports mutually-recursive definitions.

Abstract PDF

Practical Subtyping for Curry-Style Languages [LepRaf2019]
Rodolphe Lepigre, Christophe Raffalli
ACM Trans. Program. Lang. Syst. 41(1): 5:1-5:58 (2019)

Abstract. We present a new, syntax-directed framework for Curry style type-systems with subtyping. It supports a rich set of features, and allows for a reasonably simple theory and implementation. The system we consider has sum and product types, universal and existential quantifiers, inductive and coinductive types. The latter two may carry size invariants that can be used to establish the termination of recursive programs. For example, the termination of quicksort can be derived by showing that partitioning a list does not increase its size. The system deals with complex programs involving mixed induction and coinduction, or even mixed polymorphism and (co-)induction. One of the key ideas is to separate the notion of size from recursion. We do not check the termination of programs directly, but rather show that their (circular) typing proofs are well-founded. Termination is then obtained using a standard (semantic) normalisation proof. To demonstrate the practicality of the system, we provide an implementation accepting all the examples discussed in the paper.

Abstract PDF On DBLP Bibtex Official link [SubML] prototype language

Abstract Representation of Binders in OCaml using the Bindlib Library [LepRaf2018]
Rodolphe Lepigre, Christophe Raffalli
LFMTP@FSCD 2018: 42-56

Abstract. The Bindlib library for OCaml provides a set of tools for the manipulation of data structures with variable binding. It is very well suited for the representation of abstract syntax trees, and has already been used for the implementation of half a dozen languages and proof assistants (including a new version of the logical framework Dedukti). Bindlib is optimised for fast substitution, and it supports variable renaming. Since the representation of binders is based on higher-order abstract syntax, variable capture cannot arise during substitution. As a consequence, variable names are not updated at substitution time. They can however be explicitly recomputed to avoid “visual capture” (i.e., distinct variables with the same apparent name) when a data structure is displayed.

Abstract PDF On DBLP Bibtex Official link [Bindlib] library

Semantics and Implementation of an Extension of ML for Proving Programs [Lepigre2017PhD]
Rodolphe Lepigre
PhD thesis, Université Savoie Mont Blanc

Abstract. In recent years, proof assistants have reached an impressive level of maturity. They have led to the certification of complex programs such as compilers and operating systems. Yet, using a proof assistant requires highly specialised skills and it remains very different from standard programming. To bridge this gap, we aim at designing an ML-style programming language with support for proofs of programs, combining in a single tool the flexibility of ML and the fine specification features of a proof assistant. In other words, the system should be suitable both for programming (in the strongly-typed, functional sense) and for gradually increasing the level of guarantees met by programs, on a by-need basis.
We thus define and study a call-by-value language whose type system extends higher-order logic with an equality type over untyped programs, a dependent function type, classical logic and subtyping. The combination of call-by-value evaluation, dependent functions and classical logic is known to raise consistency issues. To ensure the correctness of the system (logical consistency and runtime safety), we design a theoretical framework based on Krivine's classical realizability. The construction of the model relies on an essential property linking the different levels of interpretation of types in a novel way.
We finally demonstrate the expressive power of our system using our prototype implementation, by proving properties of standard programs like the map function on lists or the insertion sort.

Abstract PDF On DBLP Bibtex Official link [PML2] prototype language

PML₂: Integrated Program Verification in ML [Lepigre2017]
Rodolphe Lepigre
TYPES 2017: 4:1-4:27

Abstract. We present the PML₂ language, which provides a uniform environment for programming, and for proving properties of programs in an ML-like setting. The language is Curry-style and call-by-value, it provides a control operator (interpreted in terms of classical logic), it supports general recursion and a very general form of (implicit, non-coercive) subtyping. In the system, equational properties of programs are expressed using two new type formers, and they are proved by constructing terminating programs. Although proofs rely heavily on equational reasoning, equalities are exclusively managed by the type-checker. This means that the user only has to choose which equality to use, and not where to use it, as is usually done in mathematical proofs. In the system, writing proofs mostly amounts to applying lemmas (possibly recursive function calls), and performing case analyses (pattern matchings).

Abstract PDF On DBLP Bibtex Official link [PML2] prototype language

A Classical Realizability Model for a Semantical Value Restriction [Lepigre2016]
Rodolphe Lepigre
ESOP 2016: 476-502

Abstract. We present a new type system with support for proofs of programs in a call-by-value language with control operators. The proof mechanism relies on observational equivalence of (untyped) programs. It appears in two type constructors, which are used for specifying program properties and for encoding dependent products. The main challenge arises from the lack of expressiveness of dependent products due to the value restriction. To circumvent this limitation we relax the syntactic restriction and only require equivalence to a value. The consistency of the system is obtained semantically by constructing a classical realizability model in three layers (values, stacks and terms).

Abstract PDF On DBLP Bibtex Official link

Mêler combinateurs, continuations et EBNF pour une analyse syntaxique efficace en OCaml (version longue) [LepRaf2015]
Rodolphe Lepigre, Christophe Raffalli
JFLA 2015 (short paper)

Abstract. The core of DeCaP, the subject of this article, is a library of parser combinators. They are the target of the translation of an EBNF-like syntax, without left recursion, that we added to OCaml. The parsers defined in this way are first-class expressions.
For efficiency, our combinators use continuations and inspect the set of characters that may start a word accepted by a grammar, in order to prune the tree of possibilities. Continuations naturally yield the full semantics of EBNF, so DeCaP can handle ambiguous grammars. Moreover, combinators inspired by the notion of delimited continuation make it possible to optimise certain grammars by restricting their semantics. Our OCaml parser is on average twice as fast as that of Camlp4, and five times slower than the original.
DeCaP also provides a quotation and anti-quotation mechanism similar to that of Camlp4, and thereby allows the syntax of OCaml to be extended. Our tool aims to be simpler and less constraining than the latter and does not impose, for instance, any lexing phase.

Abstract PDF

A Classical Realizability Interpretation of Judgement Testing [Lepigre2013M2]
Rodolphe Lepigre
Master's thesis, Université Savoie Mont Blanc

Abstract. A notion of test for intuitionistic type theory has recently been introduced by Peter Dybjer and his collaborators. It is meant to be the foundation for automatic testing tools that could be implemented in proof assistants such as Coq or Agda. Such tools would provide a way to test, at any time during the construction of a proof, if the current goal can be completed in the context. The failure of such a test would mean that the goal is impossible to prove, and its success would corroborate the partial result.
In this report, we investigate the possibility of extending the testing procedure to classical systems. We propose an interpretation of the testing procedure in terms of Krivine's classical realizability. Finally we show that the notion of test is correct, in the sense that a judgement is valid if and only if it passes every possible test.

Abstract PDF

Testing judgements of type theory [Lepigre2012M1]
Rodolphe Lepigre
Internship report, Chalmers Tekniska Högskola

Abstract. We investigate a testing framework for type theory. We first describe the Krivine Abstract Machine (KAM), and the Testing KAM (TKAM) which is a modified version of the KAM that allows the testing of terms of the PCF language. This follows notes by Pierre Clairambault. We use two versions of the TKAM. The first one is restricted to the Finite System T language and the second one has lazy natural numbers. We then use the virtual machines as the central part of a testing procedure for the typing of terms. This work is an implementation of certain aspects of the testing manual for intuitionistic type theory described in Peter Dybjer's paper “Program Testing and Constructive Validity”.

Abstract PDF

Software

My work on the design of type systems for programming languages and proof assistants has resulted in a relatively large amount of implementation work. The most important projects are listed in the following section, and include PML₂ (which is described in [Lepigre2017] and [Lepigre2017PhD]), SubML (which is described in [LepRaf2019]), but also more recent work on a new implementation and extension of the Dedukti logical framework. The implementation of such systems has also led me to develop specific tools and software libraries providing convenient abstractions. For instance, I take part in the development of the Bindlib library [LepRaf2018], which was initiated by Christophe Raffalli and provides an abstract representation for binders (which are very common in languages and proof systems). We also developed (with Christophe Raffalli) a system for writing parsers called Earley, which is integrated into OCaml using a BNF-like syntax extension.

The SubML language [SubML]

The SubML language implements the type system presented in [LepRaf2019]. Its many features include subtyping, inductive and coinductive types, polymorphism, existential types, sized types and a termination checker. This is joint work with Christophe Raffalli.

Git repository Online demo

The PML₂ language [PML2]

The PML₂ language implements the system presented in [Lepigre2017PhD]. It can be seen as an extension of SubML with proofs of programs, classical logic and higher-order features. Examples of programs and proofs of programs are given in [Lepigre2017PhD] and [Lepigre2017]. I was the only author of PML₂ (up to minor fixes and examples) until the PhD thesis version (see below). Christophe Raffalli has been contributing to the development of the language since September 2017.

Git repository Thesis version

The Lambdapi language [Lambdapi]

Lambdapi is an implementation of the λΠ-calculus modulo rewriting similar to Dedukti (Dowek et al.), but based on the Bindlib library (see below). It is intended to become the new implementation of Dedukti in the near future. The code is about half as long, well documented, and first experiments indicate a small gain in terms of performance.

Git repository

The OCaml Bindlib library for bound variables [Bindlib]

Bindlib is an OCaml library providing tools for the manipulation of data structures with bound variables (e.g., λ-calculus, quantified formulas). It allows for efficient substitution and supports variable renaming. The project was initiated by Christophe Raffalli. I contributed several simplifications, improvements, new features, and most of the current documentation.
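As a flavour of the library (a minimal sketch in the spirit of the examples of [LepRaf2018]; the exact interface may differ slightly between Bindlib versions), here is the usual representation of the pure λ-calculus:

  open Bindlib

  (* Terms of the λ-calculus, with the binder of abstractions handled by Bindlib. *)
  type term =
    | Var of term var
    | Abs of (term, term) binder
    | App of term * term

  (* Injection of a free variable into the type of terms. *)
  let mkfree : term var -> term = fun x -> Var x

  (* Smart constructors, building terms “under construction” (boxed terms). *)
  let var x   = box_var x
  let abs x t = box_apply (fun b -> Abs b) (bind_var x t)
  let app t u = box_apply2 (fun t u -> App (t, u)) t u

  (* The identity function λx.x, and one step of β-reduction via [subst]. *)
  let id : term =
    let x = new_var mkfree "x" in
    unbox (abs x (var x))

  let beta : term -> term -> term = fun f a ->
    match f with Abs b -> subst b a | _ -> App (f, a)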

Git repository [LepRaf2018]

Earley parser combinator library for OCaml [Earley]

Earley is a parser combinator library for the OCaml language. It relies on Earley's parsing algorithm, and it is intended to be used in conjunction with a preprocessor providing an OCaml syntax extension that allows the definition of parsers inside OCaml source code. This is joint work with Christophe Raffalli, following [LepRaf2015] in which a different parsing technology was used.

Git repository

Timed references for imperative state in OCaml [Timed]

The Timed library allows the encapsulation of reference updates in an abstract notion of state. It can be used to emulate a pure interface while working with (imperative) references. This is joint work with Christophe Raffalli.
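The underlying idea can be sketched as follows (a hypothetical, simplified illustration of the mechanism, not the actual interface of the Timed library): updates performed through a dedicated setter are recorded in an undo log, so that the state at a previously saved point in time can be restored later.

  (* Global undo log: closures restoring the previous values of updated references. *)
  let log : (unit -> unit) list ref = ref []

  (* A saved point in time is simply the undo log at the moment of saving. *)
  type time = (unit -> unit) list

  let save () : time = !log

  (* Assignment through [set] records how to undo the update. *)
  let set : 'a ref -> 'a -> unit = fun r v ->
    let old = !r in
    log := (fun () -> r := old) :: !log;
    r := v

  (* Restore the state saved as [t] by undoing all updates recorded since then
     (the saved log is assumed to be physically a suffix of the current one). *)
  let restore : time -> unit = fun t ->
    let rec undo l =
      if l != t then
        match l with
        | []      -> ()
        | f :: l' -> f (); undo l'
    in
    undo !log; log := t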

Git repository

Imagelib library for OCaml [Imagelib]

The Imagelib library implements several image formats in pure OCaml, without relying on any C library, which makes it suitable for compilation using js_of_ocaml. Supported image formats are PNG (RFC 2083), PPM, PGM, and PBM. The formats JPG, GIF and XCF are only partially supported.

Git repository

Patoline, a modern typesetting system in OCaml [Patoline]

Patoline aims at providing an alternative to TeX-based systems by relying on a high-level programming language: OCaml. The project was initiated by Pierre-Étienne Meunier, Christophe Raffalli and Tom Hirschowitz. I joined the project in 2013 and became one of its main contributors. I notably used Patoline to produce my thesis [Lepigre2017PhD].

Website Git repository

Talks and paper presentations

Date and location Title and kind of talk Slides and videos
21/01/2022
Philadelphia, USA (online)
VIP: Verifying Real-World C Idioms with Integer-Pointer Casts
Paper presentation at the POPL conference
Slides Video
22/01/2020
New Orleans, USA
The Future is Ours - Prophecy Variables in Separation Logic
Paper presentation at the POPL conference
Slides Video
28/10/2019
Aarhus, Denmark
Prophecy Variables in Separation Logic (Extending Iris with Prophecy Variables)
Contributed talk at the Iris workshop
Slides (pdf)
20/09/2019
Marseille, France
A Practical Framework for Curry-Style Languages (Inspired by Realizability Semantics)
Invited talk at the Realizability Workshop (CIRM)
Slides (pdf)
16/09/2019
Cachan, France
Prophecy Variables in Separation Logic (Extending Iris with Prophecy Variables)
Deducteam seminar (ENS de Cachan)
Slides (pdf)
11/12/2018
Nantes, France
The PML₂ Language: Integrated Program Verification in ML
Gallinette seminar (Inria)
Slides (pdf)
03/12/2018
Paris, France
The PML₂ Language: Integrated Program Verification in ML
Prosecco seminar (Inria)
Slides (pdf)
29/11/2018
Saarbrücken, Germany
The PML₂ Language: Integrated Program Verification in ML
Postdoc interview talk at the Max Planck Institute for Software Systems
Slides (pdf)
27/11/2018
Orsay, France
An Overview of the PML₂ Language: Realizability, Subtyping and Cyclic Proofs
Invited talk at the “journées inaugurales du GT Scalp” (LRI)
Slides (pdf)
01/10/2018
Paris, France
Abstract Representation of Binders using the Bindlib Library
Gallium seminar (Inria Paris)
Slides (pdf)
20/09/2018
Cachan, France
Termination checking using well-founded typing derivations
Deducteam seminar (ENS de Cachan)
Slides (utf-8)
07/07/2018
Oxford, UK
Representation of Binders Using the Bindlib (OCaml) Library
Paper presentation at the LFMTP workshop
Slides (pdf)
13/06/2018
Marseille, France
The PML Language: Realizability at the Service of Program Proofs
Invited talk at the Realizability Workshop (CIRM)
Slides (pdf)
08/03/2018
Paris, France
The PML₂ Language: Proving Programs in ML
Gallium seminar (Inria Paris)
Slides (pdf)
17/01/2018
Paris, France
Practical Curry-Style using Choice Operators and Local Subtyping
Type theory and realizability seminar (IRIF)
Slides (pdf)
22/09/2017
Villetaneuse, France
Circular Proofs for Subtyping and Termination
Seminar of the LCR team (LIPN)
Slides (pdf)
01/06/2017
Budapest, Hungary
Theory and Demo of PML2: Proving Programs in ML
Contributed talk at the TYPES conference
Slides (pdf)
22/02/2017
Saclay, France
PML2, Semantical Value Restriction & Pointed Subtyping
Visit to the Parsifal Team (Inria Saclay)
Slides (pdf)
16/02/2017
Marseille, France
Proofs of programs and subtyping in PML2
Séminaire Logique et Interactions (I2M)
Slides (pdf)
07/04/2016
Eindhoven, Netherlands
A Classical Realizability Model for a Semantical Value Restriction
Paper presentation at the ESOP conference
Slides (pdf)
26/02/2016
Cambridge, UK
A call-by-value realizability model for PML
Logic and Semantics Seminar (Computer Laboratory)
Slides (pdf)
03/12/2015
Lyon, France
A call-by-value realizability model for PML
Séminaire CHoCoLa (ENS de Lyon)
Slides (pdf)
08/01/2015
Val d'Ajol, France
Mêler combinateurs et BNF pour l'analyse syntaxique en OCaml
Paper presentation at the JFLA workshop
Slides (pdf)
03/12/2014
Montevideo, Uruguay
Toward an Adequation Lemma for PML2
Visit to the Equipo de Lógica, Facultad de Ingeniería, Udelar
Slides (pdf)
13/04/2014
Grenoble, France
Realizability, Testing and Game Semantics
Contributed talk at the GaLoP workshop
Slides (pdf)