The program will be composed of six tutorials, each lasting 2h30,
and five technical talks, each lasting 1h30.
Abstracts
Stochastic games Krishnendu Chatterjee (IST Austria)
Game theory is the study of interaction between agents across the rich
spectrum of relationships ranging from conflict to
cooperation. The study of game theory was originally proposed as the
mathematical foundation for the behavior of rational agents in economics,
and over the last few decades it has proved its usefulness by providing
new techniques and insights in logic and set theory, auction design
and implementation, the design and study of the internet,
verification, and biology.
Dynamic games are used to model competitive processes evolving over
time. Stochastic transitions are used for abstraction in modeling or
to formalize inherent uncertainty, leading to quantitative statistical
analysis. Stochastic games are dynamic games with stochastic
transitions. They have a wide range of applications including
economics, cell, population and evolutionary biology, queueing theory
and performance evaluation, and quantitative temporal model checking.
In this tutorial we explore the spectrum of stochastic game models
ranging from Markov chains, to Markov decision processes, to 2-player
perfect-information stochastic games. We will focus on the application
of stochastic games in verification, and consider omega-regular
objectives encompassing safety, liveness and parity objectives used in
temporal logic model checking. The tutorial will describe some of the
key techniques in analysis of stochastic games.
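As a deliberately tiny illustration of one standard analysis technique, the following sketch computes maximal reachability probabilities in a Markov decision process by value iteration; the toy model and all names in it are invented for illustration, not taken from the tutorial.

```python
# Value iteration for maximal reachability probabilities in a small MDP:
# in each state the controller picks the action maximising the expected
# probability of eventually reaching the target set.

def max_reach(states, actions, P, target, iters=200):
    """P[(s, a)] maps each successor state to its probability."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        nv = {}
        for s in states:
            if s in target or not actions[s]:
                nv[s] = v[s]        # target or absorbing state
            else:
                nv[s] = max(sum(p * v[t] for t, p in P[(s, a)].items())
                            for a in actions[s])
        v = nv
    return v

# Toy MDP: from s0, action "risky" reaches the goal with probability 0.5
# or gets stuck forever; "safe" loops with 0.9 and reaches the goal with
# 0.1, so playing "safe" reaches the goal with probability 1 in the limit.
states = ["s0", "goal", "stuck"]
actions = {"s0": ["risky", "safe"], "goal": [], "stuck": []}
P = {
    ("s0", "risky"): {"goal": 0.5, "stuck": 0.5},
    ("s0", "safe"):  {"s0": 0.9, "goal": 0.1},
}
v = max_reach(states, actions, P, target={"goal"})
```

The same fixpoint scheme, with max replaced by min or by alternating min/max, handles minimal probabilities and 2-player stochastic games.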
References
- Krishnendu Chatterjee and Thomas A. Henzinger. A Survey of
Stochastic Omega-regular Games. Journal of Computer and System
Sciences, to appear.
- Jerzy Filar and Koos Vrieze. Competitive Markov Decision Processes.
Springer, 1996.
Verification of security protocols Véronique Cortier
(LORIA, CNRS, Nancy, France)
Security protocols are short programs aiming at securing
communications over a network. They are widely used in our everyday
life. They may achieve various goals depending on the application:
confidentiality, authenticity, privacy, anonymity, fairness,
etc. Their verification using symbolic models has proved valuable
for detecting attacks and proving security properties. A famous
example is the Needham-Schroeder protocol [7], in which G. Lowe
discovered a flaw 17 years after its publication [5]. Secrecy
preservation has been proved to be co-NP-complete for a bounded
number of sessions [8], and decidable for an unbounded number of
sessions under some additional restrictions (e.g. [1, 3, 4, 9]). Many
tools have also been developed to automatically verify cryptographic
protocols like [6, 2].
In this tutorial, we first overview several techniques used for
symbolically verifying security protocols that have led to the design
of many efficient and useful tools. Various formal models have been
proposed for representing security protocols. They all have in common
that messages are represented by terms, preserving the structure of
messages but abstracting away all implementations details of the
functions such as encryption, signatures or Exclusive Or. We will see
how the analysis of security protocols then reduces to solving
constraint systems or resolving (fragments of) Horn clauses.
However, the guarantees that symbolic approaches offer have been quite
unclear compared to the computational approach that considers issues
of complexity and probability. This latter approach captures a strong
notion of security, guaranteed against all probabilistic
polynomial-time attacks. In a second part of the tutorial, we present
recent results that aim at obtaining the best of both worlds: fully
automated proofs and strong, clear security guarantees. The approach
consists of proving that symbolic models are sound with respect to
computational ones, that is, that any potential attack is indeed
captured at the symbolic level.
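As a rough illustration of the symbolic setting, the sketch below represents messages as terms and computes a naive Dolev-Yao deduction closure, with decomposition of pairs and decryption under known keys only; real tools additionally handle composition, unification, and unbounded nonces, so this is a toy, not any tool's actual algorithm.

```python
# Messages are terms: atoms are strings, ("pair", m1, m2) and
# ("enc", m, k) build compound terms. The intruder closure repeatedly
# applies decomposition rules until a fixpoint is reached.

def derivable(knowledge, goal, max_rounds=10):
    known = set(knowledge)
    for _ in range(max_rounds):
        new = set()
        for t in known:
            if isinstance(t, tuple) and t[0] == "pair":
                new.update([t[1], t[2]])       # project both components
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                new.add(t[1])                  # decrypt with a known key
        if new <= known:
            break
        known |= new
    return goal in known

# The intruder observes {secret}_k paired with a nonce; the secret is
# safe until the key k leaks.
msg = ("pair", ("enc", "secret", "k"), "nonce")
safe_before = derivable({msg}, "secret")
leaked_after = derivable({msg, "k"}, "secret")
```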
References
- [1] R. Amadio and W. Charatonik. On name generation and set-based
analysis in the Dolev-Yao model. In Proc. of the 13th International
Conference on Concurrency Theory (CONCUR'02), LNCS, pages
499-514. Springer-Verlag, 2002.
- [2] B. Blanchet. An efficient cryptographic protocol verifier based on
Prolog rules. In Proc. of the 14th Computer Security Foundations
Workshop (CSFW'01). IEEE Computer Society Press, June 2001.
- [3] B. Blanchet and A. Podelski. Verification of cryptographic
protocols: Tagging enforces termination. In A. Gordon, editor,
Foundations of Software Science and Computation Structures
(FoSSaCS'03), volume 2620 of LNCS, April 2003.
- [4] H. Comon-Lundh and V. Cortier. New decidability results for fragments
of first-order logic and application to cryptographic protocols. In Proc. of
the 14th Int. Conf. on Rewriting Techniques and Applications (RTA'2003),
volume 2706 of LNCS, pages 148-164. Springer-Verlag, 2003.
- [5] G. Lowe. Breaking and fixing the Needham-Schroeder public-key protocol
using FDR. In T. Margaria and B. Steffen, editors, Tools and Algorithms
for the Construction and Analysis of Systems (TACAS'96), volume 1055 of
LNCS, pages 147-166. Springer-Verlag, March 1996.
- [6] G. Lowe. Casper: A compiler for the analysis of security
protocols. In Proc. of the 10th Computer Security Foundations Workshop
(CSFW'97), Rockport, Massachusetts, USA, 1997. IEEE Computer Society
Press. Also in Journal of Computer Security, 6:53-84, 1998.
- [7] R. Needham and M. Schroeder. Using encryption for authentication in large
networks of computers. Communications of the ACM, 21(12):993-999, 1978.
- [8] M. Rusinowitch and M. Turuani. Protocol insecurity with finite number of
sessions and composed keys is NP-complete. Theoretical Computer Science,
299:451-475, 2003.
- [9] H. Seidl and K. N. Verma. Flat and one-variable clauses: Complexity of
verifying cryptographic protocols with single blind copying. In Logic for
Programming and Automated Reasoning (LPAR'04), LNCS, pages 79-94.
Springer-Verlag, 2005.
Testing and model generation
Bengt Jonsson (Uppsala University, Sweden)
Model-based techniques for verification and validation of reactive
systems, including model checking and model-based testing, have
witnessed drastic advances in the last decades. They require formal
models that specify the intended behavior of system components, which
ideally should be developed during specification and design. However,
constructing models typically requires significant manual effort,
meaning that in practice models are often unavailable, or become
outdated as a system evolves. Automated support for generating such
models would therefore be very useful.
In this tutorial, we will cover techniques for constructing models of
components in reactive systems from observations of their external
behavior, i.e., using a black-box approach to the generation of
models. This can be done using techniques from automata learning
(aka. regular inference). We will cover existing approaches to
automata learning, including the assumptions they make on the
component that is being investigated. There are close connections
between automata learning and the problem of conformance testing for
finite automata, which will also be covered. Thereafter we will
consider approaches towards extending learning techniques towards
enriched automata formalisms, including subclasses of timed automata.
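One concrete point of contact between conformance testing and automata learning is the computation of separating words that distinguish inequivalent states: such words populate the test suites of the W-method and the columns of learning observation tables. A minimal sketch, with a toy DFA invented for illustration:

```python
from collections import deque

# Find a word distinguishing two states of a DFA by BFS over pairs of
# states, stopping when exactly one state of the pair is accepting.

def distinguishing_word(delta, accepting, alphabet, s, t):
    seen = {(s, t)}
    queue = deque([(s, t, "")])
    while queue:
        p, q, w = queue.popleft()
        if (p in accepting) != (q in accepting):
            return w
        for a in alphabet:
            pa, qa = delta[(p, a)], delta[(q, a)]
            if (pa, qa) not in seen:
                seen.add((pa, qa))
                queue.append((pa, qa, w + a))
    return None  # the two states are equivalent

# Toy DFA over {a, b}: only q2 accepts; q0 and q1 differ after one "b".
delta = {("q0", "a"): "q0", ("q0", "b"): "q1",
         ("q1", "a"): "q1", ("q1", "b"): "q2",
         ("q2", "a"): "q2", ("q2", "b"): "q2"}
w = distinguishing_word(delta, {"q2"}, "ab", "q0", "q1")
same = distinguishing_word(delta, {"q2"}, "ab", "q2", "q2")
```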
References
- T. Berg, O. Grinchtein, B. Jonsson, M. Leucker, H. Raffelt, and
B. Steffen. On the correspondence between conformance testing and
regular inference. In FASE, volume 3442 of Lecture Notes in Computer
Science, pages 175-189, 2005.
- M. Broy, B. Jonsson, J.-P. Katoen, M. Leucker, and A. Pretschner,
editors. Model-Based Testing of Reactive Systems, volume 3472 of
Lecture Notes in Computer Science. Springer Verlag, 2004.
- D. Lee and M. Yannakakis. Principles and methods of testing finite
state machines - a survey. Proc. IEEE, 84(8):1090-1126, 1996.
Model checking probabilistic systems Joost-Pieter Katoen (RWTH
Aachen, Germany)
This lecture will provide an introduction to the verification of
continuous-time Markov chains (CTMCs), a model that combines discrete
probabilistic branching with random state residence times. CTMCs are
prominent in performance and dependability evaluation, occur as the
semantic model of high-level modeling formalisms
such as stochastic Petri nets and process algebras, and are frequently
used in systems biology. We will introduce a branching-time logic on
CTMCs, and explain in detail how the validity of these logical formulas
can be model-checked on finite CTMCs. In order to handle large, or even
infinite CTMCs, we introduce an abstraction technique that fits within
the realm of three-valued abstraction methods. The key ingredients of
this technique are a partitioning of the state space combined with an
abstraction of transition probabilities by intervals. We will present
the underlying theory of this abstraction, some examples, and indicate
how such abstraction can be applied in a compositional manner. Finally,
we show how CTMCs can be verified against linear real-time properties
given as deterministic timed automata.
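A computational core of CTMC model checking is transient analysis by uniformization: the CTMC with rate matrix Q is turned into a discrete-time chain P = I + Q/Lambda subordinated to a Poisson process, and time-bounded properties reduce to weighted sums over its step distributions. The following is a minimal sketch of that textbook construction; the two-state failure model at the end is invented for illustration.

```python
import math

# Transient distribution of a CTMC at time t via uniformization.

def transient(Q, p0, t, eps=1e-9, kmax=1000):
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0   # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    result = [0.0] * n
    pi = list(p0)                     # distribution after k DTMC steps
    weight = math.exp(-lam * t)       # Poisson(k; lam*t), starting k = 0
    acc = 0.0
    for k in range(1, kmax + 1):
        result = [r + weight * x for r, x in zip(result, pi)]
        acc += weight
        if acc > 1.0 - eps:           # Poisson tail is negligible
            break
        weight *= lam * t / k
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return result

# Two-state CTMC: state 0 fails with rate 2; state 1 is absorbing.
Q = [[-2.0, 2.0], [0.0, 0.0]]
p = transient(Q, [1.0, 0.0], t=1.0)
# analytically, p[1] = 1 - exp(-2 * 1.0)
```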
References
- Christel Baier, Boudewijn R. Haverkort, Holger Hermanns, and Joost-Pieter
Katoen. Model-checking algorithms for continuous-time Markov chains.
IEEE Transactions on Software Engineering, 29(6):524-541, 2003.
- Joost-Pieter Katoen, Daniel Klink, Martin Leucker, and Verena
Wolf. Abstraction for Stochastic Systems by Erlang's Method of
Stages. In 19th International Conference on Concurrency Theory
(CONCUR'08), pages 279-294, volume 5201 of LNCS. Springer, 2008.
- Taolue Chen, Tingting Han, Joost-Pieter Katoen, and Alexandru Mereacre.
Quantitative Model Checking of Continuous-Time Markov Chains Against
Timed Automata Specifications. In IEEE Symposium on Logic in Computer
Science (LICS). IEEE CS Press, 2009.
Programming languages for biology Andrew Phillips (Microsoft
Research Cambridge, UK)
This tutorial presents programming languages for three separate
application areas in biology: DNA Computing, Synthetic Biology and
Immunology. First, a programming language for designing molecular
devices in DNA is presented. The language allows a range of molecular
devices to be designed and simulated on a computer, and then compiled
to DNA sequences. In future, such languages could be used to design a
universal computer made solely of DNA. Second, a programming language
for engineering living cells is presented. The language allows a range
of genetic devices to be designed on a computer and compiled to
genetic parts. Given a computational model and an extensive library of
genetic parts, a compiler automatically selects the parts that satisfy
the design constraints, allowing cells to be programmed more
efficiently and reliably. In future, such languages could be used to
program cells to address societal challenges in areas of food,
medicine, energy and the environment. Third, a programming language
for modelling existing biological systems is presented. The language
is used to model some of the basic processes of viral detection in
living cells, and to make preliminary predictions about how the immune
system can be reprogrammed to improve its response to a given
virus. In future, such languages could be used to program
computational models of the immune system, in order to understand how
it works and how to improve its ability to combat disease. The
tutorial also describes basic principles of model simulation that are
common to all three languages, and discusses how these three rather
different languages can be unified under a common modelling engine.
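One simulation principle shared by stochastic process languages of this kind is Gillespie's stochastic simulation algorithm (SSA), which repeatedly samples the time to the next reaction and which reaction fires from the current propensities. The sketch below is a generic SSA kernel with an invented one-reaction degradation model; it does not reproduce the tutorial's actual languages or engines, and for simplicity it fires the sampled event even when it slightly overshoots t_end.

```python
import random

# Minimal Gillespie SSA over a dict of species counts.

def ssa(x, reactions, t_end, seed=1):
    """x: species counts; reactions: list of (rate_fn, update_fn)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [rate(x) for rate, _ in reactions]
        total = sum(props)
        if total == 0:                    # no reaction can fire
            break
        t += rng.expovariate(total)       # time to the next event
        r = rng.random() * total          # choose a reaction
        acc = 0.0
        for p, (_, update) in zip(props, reactions):
            acc += p
            if r < acc:
                update(x)
                break
    return x

# Toy model: degradation A -> 0, rate constant 1.0 per molecule.
state = {"A": 50}
reactions = [(lambda s: 1.0 * s["A"],
              lambda s: s.update(A=s["A"] - 1))]
final = ssa(state, reactions, t_end=100.0)
```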
Timed systems James Worrell (Oxford University, UK)
This talk will outline known results and open problems concerning
timed and hybrid automata and real-time logics, focusing on complexity
theory and expressiveness.
We will recall classical results in the theory of timed and hybrid
automata including the complexity of reachability, non-closure under
complement, undecidability of language equivalence and the complexity
of timed games. In addition we will consider more recent results on
decidable sub-classes of automata.
Concerning logics we will discuss the decidability and complexity of
real-time model checking and the relative expressiveness of temporal
and monadic predicate logics over the reals.
References
- R. Alur and P. Madhusudan. Decision Problems for Timed Automata:
A Survey. Formal Methods for the Design of Real-Time Systems. Springer
LNCS, volume 3185, 2004.
- Y. Hirshfeld and A. Rabinovich. Timer formulas and decidable metric
temporal logic. Information and Computation, 198(2):148-178, 2005.
Compositional shape analysis Dino Distefano (Queen Mary
University of London, UK)
This talk describes a compositional shape analysis, where each
procedure is analyzed independently of its callers. The analysis uses
an abstract domain based on a restricted fragment of separation logic,
and assigns a collection of Hoare triples to each procedure; the
triples provide an over-approximation of data structure usage.
Compositionality brings its usual benefits to shape analysis for the
first time: increased potential to scale, the ability to deal with
unknown calling contexts, and a graceful way to deal with imprecision.
The analysis rests on a generalized form of abduction (inference of
explanatory hypotheses) which we call bi-abduction. Bi-abduction
displays abduction as a kind of inverse to the frame problem: it
jointly infers anti-frames (missing portions of state) and frames
(portions of state not touched by an operation), and is the basis of a
new interprocedural analysis algorithm.
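To give a first flavour of frame/anti-frame inference, the deliberately naive sketch below works over flat sets of "resource facts" instead of separation-logic formulas; it shows only the shape of the decomposition, not the real abduction machinery, and all facts in the example are invented.

```python
# Given the caller's current symbolic state and the callee's
# precondition, the anti-frame is what is missing (to be assumed) and
# the frame is what the call leaves untouched.

def biabduce(state, pre):
    antiframe = pre - state      # missing resources
    frame = state - pre          # untouched resources
    return antiframe, frame

state = {"x|->1", "y|->2"}
pre = {"x|->1", "z|->3"}         # the callee needs x and z
antiframe, frame = biabduce(state, pre)
```

In the real analysis these sets are separation-logic heaps joined by the separating conjunction, and abduction must solve entailments rather than set differences, but the two outputs play exactly these roles.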
We have implemented our analysis algorithm and we report case studies
on smaller programs to evaluate the quality of discovered
specifications, and larger programs (e.g., an entire Linux
distribution) to test scalability and graceful imprecision.
Bounded model checking of hybrid systems Martin Fränzle (University of Oldenburg, Germany)
The analysis of hybrid discrete-continuous systems calls for explicitly
modeling and evaluating the tight interaction of their discrete switching
behavior and their continuous dynamics. Within this lecture, we
concentrate on automatic analysis of hybrid systems through bounded
model checking, explaining symbolic methods manipulating both the
discrete and the continuous state components symbolically by means of
predicative encodings fed to dedicated constraint solvers. We provide a
brief introduction to hybrid discrete-continuous systems and continue to a
set of constraint-based methods for automatically analyzing different
classes of hybrid discrete-continuous dynamics, covering the range from
non-linear discrete-time hybrid systems to probabilistic hybrid systems.
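As a self-contained stand-in for the predicative encodings described above (which in practice are discharged by dedicated constraint solvers), the sketch below performs bounded exploration of an invented discrete-time thermostat, with intervals playing the role of the symbolic continuous state.

```python
# Discrete-time thermostat: mode "on" heats by 2 per step, "off" cools
# by 1.5; the controller switches outside [18, 22]. We check the bounded
# safety property that the temperature interval stays within bounds for
# k steps.

def step(mode, lo, hi):
    if mode == "on":
        lo, hi = lo + 2.0, hi + 2.0
    else:
        lo, hi = lo - 1.5, hi - 1.5
    # guard-driven switching, applied to the whole interval for simplicity
    if hi >= 22.0:
        mode = "off"
    elif lo <= 18.0:
        mode = "on"
    return mode, lo, hi

def bounded_safe(mode, lo, hi, k, bounds=(15.0, 25.0)):
    for _ in range(k):
        mode, lo, hi = step(mode, lo, hi)
        if lo < bounds[0] or hi > bounds[1]:
            return False           # a bounded counterexample exists
    return True

ok = bounded_safe("on", 19.0, 20.0, k=50)
```

A bounded model checker encodes the same k-step unrolling as one big constraint formula and asks a solver for a satisfying (i.e. violating) run, rather than enumerating it explicitly as this toy does.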
Realisability of message sequence charts Blaise Genest (IPAL/CNRS, Singapore)
Message Sequence Charts (MSCs) are sequence diagrams to represent
different scenarios of interaction between several processes
communicating asynchronously through messages sent into point to point
channels. The specification language SDL defines them and gives their
semantics through a norm of the International Telecommunication
Union. MSC languages retain a truly asynchronous
communication scheme (unlike Petri nets or Mazurkiewicz traces), at the
expense of technical difficulties in handling message channels. In
order to represent MSC languages in a finite way, the norm defines
graphs of MSCs (MSC-graphs), basically by using finite state
systems. The MSC-graph formalism is interesting practically because it
offers intuitive (sequential) descriptions of otherwise deeply
distributed systems. It is thus theoretically crucial to consider the
realization problem, that is, the study of which distributed systems
correspond to which class of MSC-graphs, and how to automatically
transform one into the other. In order to describe distributed
systems, we will use Communicating Finite State Machines (CFMs, also
called communicating automata), another standard of SDL, because they
closely match distributed implementations.
There is certainly no unique best answer to this question of
comparing MSC-graphs with CFMs: different settings fit different
scenarios, a variety of restrictions on control (and in particular
on channels) can be imposed, and a variety of equivalences between the
two formalisms can be defined. This technical talk will thus present
several results tackling this question under several settings, which
were obtained during the 2000s. A particularly interesting
technique that we will present is the definition of a fixed
independence alphabet [3,2], in order to lift results from
Mazurkiewicz trace theory [1] to MSCs (and in particular Zielonka's
theorem).
References
- [1] Volker Diekert and Grzegorz Rozenberg (editors). The Book of Traces.
World Scientific Publishing Co., 1995.
- [2] Blaise Genest, Dietrich Kuske, and Anca Muscholl. A Kleene theorem
and model checking algorithms for existentially bounded communicating
automata. Inf. Comput. 204(6):920-956, 2006.
- [3] Dietrich Kuske. Regular sets of infinite message sequence charts.
Inf. Comput. 187(1):80-109, 2003.
Additionally, two very interesting papers overviewing the results of this
talk can be found in:
- Philippe Darondeau. Synthesis and Control of Asynchronous and Distributed
Systems. ACSD 2007, pages 13-22. This survey is much more general, tackling
the problem of control and implementation of asynchronous distributed
systems.
- Benedikt Bollig, Joost-Pieter Katoen, Carsten Kern, and Martin Leucker.
Learning Communicating Automata from MSCs. IEEE Transactions on Software
Engineering, 2010, to appear. This paper, in order to tackle the problem of
learning CFMs from MSCs, uses (and thus reviews) the different realization
techniques.
Presburger arithmetic and verification of infinite state systems
Jérôme Leroux (LaBRI/CNRS, Bordeaux, France)
The automatic verification of reactive systems is a major field of
research. These systems are usually modeled with variables taking
values in some infinite domains. Popular approaches for analyzing
these models are based on decision procedures adapted to the variables
domains. For integer variables, Presburger arithmetic FO(Z, +, <)
provides a natural logic for expressing linear constraints. This logic
has attractive properties: it is decidable, and many solvers implement
decision procedures for the full logic.
Some of these solvers like LASH, LIRA and MONA are based on automata
packages. Intuitively, mappings from words to integer vectors are used
to associate to automata potentially infinite sets of integer vectors
(one vector per accepted word). The minimal deterministic
automaton is canonically associated with the represented set and does
not depend on the way it has been computed. In particular these solvers
are well adapted to sets obtained after long chains of operations like
in symbolic model checking. However, extracting geometrical properties
(for instance linear constraints) from automata is a challenging
problem. From a theoretical point of view, this problem was
solved in 2005 with a polynomial time algorithm that computes
Presburger formulas from automata representing Presburger sets. The
algorithm first extracts the "necessary" linear constraints
from the automaton. Then, it computes an unquantified Presburger
formula using only these constraints, Boolean operations, translations
by integer vectors, and scaling by integer values. Note that the
algorithm that computes automata from Presburger formulas and the
converse one that produces Presburger formulas from automata together
provide the first algorithm that normalizes Presburger formulas
into unique canonical forms depending only on the denoted
sets. Intuitively, in this normalization process the minimization
procedure for automata acts like a simplification procedure for
Presburger arithmetic.
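The classical construction underlying such automata packages can be seen on one example: the solutions of x + y = z, with numbers read least-significant-bit first, are recognised by an automaton whose only state is the carry. A minimal sketch of that standard example:

```python
# Automaton for x + y = z over bit-triple words, LSB first: the state is
# the carry, and a word is accepted if every bit of z matches the sum
# and the final carry is 0.

def accepts(word):
    """word: sequence of bit triples (x_i, y_i, z_i), LSB first."""
    carry = 0
    for x, y, z in word:
        s = x + y + carry
        if s % 2 != z:
            return False
        carry = s // 2
    return carry == 0

def encode(x, y, z, width):
    return [((x >> i) & 1, (y >> i) & 1, (z >> i) & 1)
            for i in range(width)]

# 5 + 9 = 14 is accepted; 5 + 9 = 13 is not.
ok = accepts(encode(5, 9, 14, width=5))
bad = accepts(encode(5, 9, 13, width=5))
```

Boolean combinations and quantification of Presburger formulas then correspond to product, complement, and projection on such automata.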
In this presentation, we recall the classical algorithms that produce
automata from Presburger formulas. We also recall some theoretical
results like the Muchnik criterion that provides a simple way for
deciding if an automaton represents a Presburger formula, the
Cobham/Semenov theorem that characterizes the sets that can be
represented by automata with two multiplicatively independent basis of
decomposition. We also provide some hints on the construction of
Presburger formulas from automata. Finally, we show some experiments
and we provide applications of the Presburger arithmetic on the
verification of infinite state systems.
References
- Jérôme Leroux. A polynomial time Presburger criterion and synthesis
for number decision diagrams. In 20th IEEE Symposium on Logic in
Computer Science (LICS 2005), Proceedings, pages 147-156. IEEE Computer
Society, 2005.
- Jérôme Leroux and Gérald Point. TaPAS: The Talence Presburger
Arithmetic Suite. In Tools and Algorithms for the Construction and
Analysis of Systems, 15th International Conference, TACAS 2009,
Proceedings, volume 5505 of Lecture Notes in Computer Science, pages
182-185. Springer, 2009.
- Jérôme Leroux and Grégoire Sutre. Flat counter automata almost
everywhere! In Automated Technology for Verification and Analysis,
Third International Symposium, ATVA 2005, Proceedings, volume 3707 of
Lecture Notes in Computer Science, pages 489-503. Springer, 2005.
Model checking recursive programs
Stefan Schwoon (LSV, Cachan, France)
Pushdown automata are a well-known formalism from formal-language theory;
essentially, they are finite-state automata extended with a stack.
They provide a convenient formal model of the behaviour of
programs with procedures, where the stack ensures that procedure
calls are handled faithfully.
Pushdown systems are an instance of infinite-state systems that are amenable
to efficient verification. For instance, they can be used to verify boolean
abstractions of C programs, a task performed by the Static Driver Verifier
developed by Microsoft. The talk will discuss the theoretical backgrounds
of pushdown model checking, basic verification algorithms, and existing tools.
We will also consider some existing approaches to extend the technique to
multithreaded programs.
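The algorithmic core of pushdown model checking is saturation: the (regular) set of configurations from which a target set is reachable is computed by adding transitions to a finite automaton until a fixpoint. A minimal sketch of pre* saturation, with a one-rule example invented for illustration:

```python
# pre* saturation: for a pushdown rule <p, a> -> <p2, w>, whenever the
# current automaton has a path p2 --w--> q, add a transition (p, a, q).
# The result accepts exactly the configurations from which the original
# target set is reachable.

def pre_star(rules, trans):
    """rules: set of (p, a, p2, w) with w a tuple of stack symbols;
    trans: set of automaton transitions (state, symbol, state)."""
    trans = set(trans)
    changed = True
    while changed:
        changed = False
        for p, a, p2, w in rules:
            frontier = {p2}                    # states reachable on w
            for sym in w:
                frontier = {q for (s, g, q) in trans
                            if s in frontier and g == sym}
            for q in frontier:
                if (p, a, q) not in trans:
                    trans.add((p, a, q))
                    changed = True
    return trans

# One rule: <p, a> -> <p, ""> pops a. The target set {<p, empty stack>}
# is accepted with p final and no transitions; saturation adds
# (p, "a", p), i.e. every stack of a's can reach the empty stack.
rules = {("p", "a", "p", ())}
trans = pre_star(rules, set())
```

Efficient implementations refine exactly this fixpoint with worklists and indexing, which is what gives pushdown systems their practical verification algorithms.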