+49 631 93038529
Paul-Ehrlich-Str. 26, Room 316
67663 Kaiserslautern, Germany
I have moved to the Technical University of Dortmund (Germany) and the associated Center for Trustworthy Data Science and Security, where I am now Professor of Verification and Formal Guarantees of Machine Learning.
Link to my new website: https://www.rc-trust.ai/neider
I am a research group leader at the Max Planck Institute for Software Systems in Kaiserslautern and teach at the University of Kaiserslautern (see below for a list of the courses I offer). Moreover, I am a principal investigator in several research projects.
My research interests lie broadly at the intersection of machine learning and formal methods. I am especially interested in combining inductive techniques from machine learning with symbolic techniques from logic. The goal of my group is to build automated tools for the design, construction, and analysis of intelligent systems.
My group is working on a number of topics, including:
Our projects often extend into theoretical computer science, in particular automata theory, game theory, and logic.
As a byproduct of my research, I have developed and contributed to a number of tools:
You can try some of our tools online:
Horn-ICE Learning for Synthesizing Invariants and Contracts
Invariant Synthesis for Incomplete Verification Engines
I am a principal investigator in the following projects.
The DAAD-supported project LeaRNNify lies at the interface of formal methods and artificial intelligence. Its aim is to bring together two different kinds of algorithmic learning: grammatical inference and the learning of neural networks.
More precisely, we promote the use of recurrent neural networks (RNNs) in the verification of reactive systems, a task that has so far been reserved for grammatical inference. Conversely, grammatical inference is finding its way into classical machine learning: our second goal is to use automata-learning techniques to enhance the verification, explainability, and interpretability of machine-learning algorithms, in particular RNNs.
Also visit our project website.
The goal of the DFG-funded project Temporal Logic Sketching is to develop computer-aided methods to assist engineers in writing formal specifications. To this end, the project combines inductive and deductive techniques from the areas of machine learning, artificial intelligence, and logic. As a byproduct, the project also investigates explainable machine learning, specifically learning of human-interpretable models.
I supervise Bachelor's and Master's theses centered on machine learning in verification and synthesis. If you are looking for a thesis or project in these areas, please contact me for further information.
See my DBLP and Google Scholar entries for a complete list.