Chapter summaries
Part I
Chapter 1: Building a Simple Sequential Process
We introduce the syntax that CSP
uses to create basic sequential
processes: prefixing, recursion, and conditional, external and
internal choice. The ideas and syntax of events and channels are
introduced. We learn the distinction between deterministic and
nondeterministic choice. Examples include buffer and counter processes,
and
processes that describe a human's life in terms of constituent events.
We see the ideas of traces and trace refinement. The FDR tool is
introduced, as is its input language CSPM.
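As a taste of the notation, the constructs above can be sketched in CSPM (a minimal, illustrative fragment; the process and channel names are hypothetical, not taken from the book's example files):

```cspm
channel up, down

-- A counter built from prefixing (->), recursion and external choice ([]).
COUNT(0) = up -> COUNT(1)
COUNT(n) = up -> COUNT(n+1)
           [] down -> COUNT(n-1)

-- Internal choice (|~|) is resolved nondeterministically by the process,
-- not by its environment.
FLIP = up -> STOP |~| down -> STOP
```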
Chapter 2: Understanding CSP
This chapter introduces the three types
of formalism that this book uses to
study CSP. The first of these is algebra: we give and explain
many examples of algebraic laws that CSP operators satisfy as well as
comparing these to familiar laws from arithmetic and set theory.
Denotational semantics is based on behavioural models such as
traces: we introduce rules for computing traces and fixed points, as
well as the Unique Fixed Point (UFP) principle for guarded recursions.
Operational semantics are motivated by LTSs and the operation of FDR.
We discuss trace refinement in depth and understand how to use it in
formulating and running specifications for FDR.
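A trace-refinement check as run in FDR might look like the following sketch (SPEC and COPY here are illustrative names, not the book's own files):

```cspm
datatype Value = a | b
channel left, right : Value

-- Implementation: a one-place buffer.
COPY = left?x -> right!x -> COPY

-- Specification: inputs and outputs strictly alternate.
SPEC = left?x -> right!x -> SPEC

-- Holds if and only if every trace of COPY is a trace of SPEC.
assert SPEC [T= COPY
```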
Chapter 3: Parallel Operators
We meet the main parallel
operators of CSP: synchronous, alphabetised,
interleaving and generalised parallel. We see how processes can
influence each other and transfer data along channels by synchronising
on all or some of their
events. Examples include people trying to agree on joint lives,
bargaining in a shop, and the five dining philosophers who can deadlock
through contention for resources. All of these are implemented in the
accompanying example files, and we explain how these new operators
affect FDR.
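The two extremes of these operators can be sketched as follows (a hedged illustration; the events a, b, c are hypothetical):

```cspm
channel a, b, c

P = a -> b -> P
Q = b -> c -> Q

-- Generalised parallel: P and Q must agree on every event
-- in the shared set, here just {b}.
SYNC = P [| {b} |] Q

-- Interleaving: the processes run independently, synchronising on nothing.
FREE = P ||| Q
```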
Chapter 4: Case Studies
This chapter introduces a number
of case studies that we will meet many
times in the rest of the book. The first of these shows you how you can
code Sudoku puzzles in CSP and use FDR to solve them. The second shows
how designing a routing
network can easily lead to deadlock but how well designed
networks can work reliably: we offer a number of alternatives for this
in trees, general networks, rings and cartesian grids. The final
example describes ways of overcoming an erroneous communication medium.
As well as introducing the Alternating Bit Protocol and other ways of
gaining reliability, we also introduce techniques for modelling
error-prone systems in CSP.
Chapter 5: Hiding and Renaming
We meet some new operators.
Hiding (P\X) allows you to conceal the
internal actions of a process from the outside world. Renaming (P[[R]])
gives a flexible way of changing the labels on the events that a
process communicates into different labels seen by the outside world.
We see how hiding can create more natural models of parallel systems
but can create a few theoretical difficulties, and how renaming
combined with parallelism can create seemingly magical effects such as
allowing Adam and Eve to achieve a variable renaming so each can have
their own choice of apple. We introduce link parallel: a combination of
parallel, hiding and renaming that makes constructing practical
networks comparatively simple, and see how it can be used in building
some natural dynamic networks such as one for implementing mergesort.
We find that renaming can help FDR solve Sudoku puzzles faster.
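Hiding and link parallel can be sketched as follows (an illustrative two-place buffer, assuming hypothetical channel names):

```cspm
datatype Value = a | b
channel left, mid, right : Value

SEND = left?x -> mid!x -> SEND
RECV = mid?x -> right!x -> RECV

-- Hiding turns the shared channel into internal (tau) actions.
BUFF2 = (SEND [| {| mid |} |] RECV) \ {| mid |}

-- Link parallel joins the named channels and hides them in one step.
BUFF2' = SEND [mid <-> mid] RECV
```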
Chapter 6: Beyond Traces
In previous chapters we have
already discovered that traces give an
incomplete picture of how processes behave, for example by failing to
distinguish deterministic from nondeterministic behaviour and failing
to capture deadlock properly. In this chapter we introduce the ideas of
failures and divergences, which allow us to develop models that do
capture these phenomena accurately. We see how these models allow us to
formulate and verify richer specifications on FDR. The phenomenon of
divergence is described and we show one way to ensure it does not
happen. We introduce lazy abstraction as a variant on hiding and show
how it can be applied to capturing fault tolerance and computer
security.
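The gap between the trace and failures views can be seen in a small sketch (illustrative processes, not from the book's files):

```cspm
channel a, b

DET    = a -> STOP [] b -> STOP
NONDET = a -> STOP |~| b -> STOP

-- The two processes have exactly the same traces, so this check holds...
assert DET [T= NONDET
-- ...but this failures check does not: NONDET may refuse a (or b).
assert DET [F= NONDET
```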
Chapter 7: Further Operators
This chapter completes the CSP
language by introducing three operators
that allow one process to hand control on to another. In sequential
composition P;Q this happens when the first process terminates
successfully via a special action. In the interrupt P/\Q this happens
when the environment communicates an initial action of Q, and in the
throw P[|A|>Q any occurrence of an action in A within P counts as an
exception which hands control to Q. We study
in particular the relationship between termination and parallel
composition, and the effect of termination on the semantic models seen
in earlier chapters.
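The three operators can be sketched side by side (hypothetical events; a minimal illustration):

```cspm
channel a, b, e

-- Sequential composition: the right-hand process starts when the
-- left-hand one terminates successfully via SKIP.
SEQ = (a -> SKIP) ; (b -> STOP)

-- Interrupt: at any point the environment may perform b,
-- handing control to the right-hand process.
INT = (a -> a -> STOP) /\ (b -> STOP)

-- Throw: an occurrence of e in the left-hand process counts as an
-- exception that hands control to the right-hand one.
THR = (a -> e -> STOP) [| {e} |> (b -> STOP)
```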
Chapter 8: Using FDR
This chapter describes FDR and
how to use this tool effectively. We
describe how to interact with the FDR user interface, and how to
program in CSPM and its functional sub-language, which is roughly
equivalent to Haskell. We see what FDR can and cannot do, and how to
play to its strengths: for example, in a case study based on finding a
Hamiltonian path, we see how it is better to code complex systems as
parallel compositions rather than as large sequential
processes. We show in outline how FDR actually carries out a
refinement check, and how this algorithm can be adapted to check
determinism. We discuss the relative advantages of depth-first and
breadth-first search, and finally show how compression operators that
FDR supplies can enable one to (at least partially) overcome the state explosion problem
that limits the size of systems
the tool can handle.
Part II
Chapter 9: Operational Semantics
This is an in-depth study of the
operational semantics of CSP and of
the transition systems these are based on. We study the difference
between finitely and infinitely branching transition systems, and
between ordinary LTSs and ones where there may be acceptance or
divergence information in additional labels on states. We show how CSP
can be given an operational semantics in either the traditional
Structured Operational Semantics (SOS) style or in a less flexible
Combinator style that captures the spirit of CSP. We show that any
operator with a combinator operational semantics can be expressed in
CSP. Combinators lead to Supercombinators, the technique that FDR uses
to implement transition systems effectively. Finally we show how formal
"observations" of transition systems allow us to deduce what a
process's traces, failures and
divergences are.
Chapter 10: Denotational Semantics and Behavioural Models
This chapter studies
denotational semantics in more depth than in the
introductory chapters. We discuss the nature of behavioural models and
what is required of them. We see how CSP definitions of operators
naturally lead to distributive operators, and discuss topics such as
full abstraction and congruence with operational semantics. We look in
detail at each of the traces, stable failures and failures-divergences
models, and understand their healthiness conditions. We are able to
specify determinism formally and study its relationship with the related
concept of confluence.
Chapter 11: Finite Behaviour Models
There are two dimensions to
study when we try to understand what
behavioural models for CSP exist: what finitely observable things to
record and what infinitely observable things to add to these. In this
chapter we study what finite observation models exist. We examine the
long-understood acceptances or ready-sets and refusal testing models.
We discover two newer models: the acceptance-traces model of all finite
linear observations, which is the finest possible model in this
category, and the revivals model, which sits above traces and stable
failures and below all other models. The structural result
that proves this last fact puts ideas such as full abstraction in a
completely new light. For each model we see what sorts of specification
are best cast in
terms of it.
Chapter 12: Infinite Behaviour Models
Here we study how infinite
behaviours can be added to these models. We
get three classes of models in addition to the finite observation ones.
The first of these, adding only divergence, can cope with finitely
nondeterministic CSP with divergence strictness. The second, in which
infinite traces and similar behaviours are added, can handle infinitely
nondeterministic but divergence strict CSP. In the final class
divergence strictness, or at least most
of it, is removed. The resulting models are very natural except for
requiring an exotic fixed point theory. We are able to get similar
structural results to those in Chapter 11, for the divergence-strict
models.
Chapter 13: The Algebra of CSP
We introduce a formal algebraic
semantics for CSP. First we show how
algebraic laws can systematically reduce all finite CSP terms to a head
normal form, and therefore create an Algebraic Operational Semantics.
Then we show how to reduce this to a normal form for the finest CSP
model as developed in Chapter 12. We can then show how a small
selection of optional laws, such as P = P [] P, allows this normal form
to be transformed into normal forms for any of the
main models of CSP. This provides an elegant explanation for why the
hierarchy of models developed in the previous two chapters is as it is.
Part III
Chapter 14: Timed Systems 1: tock-Time
This chapter develops the tock-CSP
model
of CSP introduced in Chapter 14 of TPC. We see how incorporating
time into CSP models necessarily changes our understanding of the
processes represented in this dialect. We develop a major new
case study: the Bully algorithm for leadership election, and explain
the use of the tau-priority model that FDR uses to support timed
analysis, implementing the principle of maximal progress. We
create timed versions of some of the routing algorithms described in
Section 4.2 and compare their timed performance.
Chapter 15: Timed Systems 2: Discrete Timed CSP
Here we look at the other CSP
view of timed systems, namely Timed CSP
in which time appears implicitly rather than as an explicit tock. We
concentrate on the discrete variant of this, based on a semantic model
involving tock to record the passage of time, and show how it can be
implemented in a variant of tock-CSP.
We
use the alternating bit protocol as an example, and show how it
relates to the more traditional continuous timed CSP through Ouaknine's
theory of digitisation.
Chapter 16: More about FDR
This chapter covers a number of
more advanced topics in the
implementation and use of FDR. The first of these is an in-depth study
of normalisation and of the other compression operators that FDR uses,
the latter including types of bisimulation and the CSP-specific
diamond operator. We then look at ideas for partial order reduction and
"lazy compression" including the chase operator. We look at
options for using parallel execution and improved memory usage in FDR.
Finally we look at ways of making FDR handle a wider range of
specifications, including ones that are not refinement-closed and
distributive, certain types of infinite-state specifications, and LTL.
Chapter 17: State Explosion and Parameterised Verification
Here we look in depth at the
state explosion problem and the related
parameterised verification problem: how to prove a property of a
general class of processes or networks. We look at induction and data
independence separately, and combine them into data-independent
induction, which can handle many networks that neither parent method
can. These include networks based on one of the routing examples from
Chapter 4, which are proved correct in general. We study the topic of
buffer tolerance: when can we prove a property of a system
with buffered channels by
studying the corresponding unbuffered
network, typically with a much smaller state space? This is closely
related to the data-flow model of concurrency. Finally, we study how
Counter-Example Guided Abstraction
Refinement (CEGAR) can be integrated naturally into FDR.
Part IV
Chapter 18: Shared-Variable Programs
This chapter shows how one can
write a compiler in CSPM and base
a tool around the result. In this case that compiler is for shared
variable programs, so this chapter is also an introduction to that
subject. We study this through examples such as mutual exclusion,
including Lamport's bakery algorithm, and the dining philosophers. The
compiler is based on programs described in a CSP data type, generated
and interpreted by the front end of the tool, which we call SVA. SVA, a
front end for FDR, is available from this book's web site.
Chapter 19: Understanding Shared Variable Concurrency
Here we look at shared variables
in depth, including a study of dirty
variables via the bakery algorithm and Simpson's 4-slot algorithm
(where we propose a version that avoids known difficulties arising from
dirty flag variables). We also define what it means for one shared
variable program to refine another one, and introduce various ideas
necessary to create finite model-checked proofs of programs (such as the bakery
algorithm) that involve unbounded integer
types.
Chapter 20: Priority and Mobility
We look at two ideas which are
not normally thought of as within the
scope of CSP, namely priority and mobility. These are motivated by a
CSP study of finding a (chess) knight's tour. We introduce a priority
operator that is consistent with some of the standard models of CSP. As
examples of priority we study Statecharts via a burglar alarm case
study, and a simplified form of these, namely two-phase
synchronous automata. We contemplate how to add mobility into CSP's
alphabet-based model of interaction and make a few tentative proposals,
including a "closed-world" form of mobile parallel composition. This is
illustrated via a model of a simple telephone network, which is also simulated in standard
CSPM.