Re: [isabelle] large system verification
> (1) Develop (the core of) the system in the proof assistant and
> extract code to a programming language.
> (2) Develop the system in the programming language directly and
> extract a model to be verified in the proof assistant.
> My 5 cents here are that, while (1) can be convenient for getting
> things done reliably, (2) is more realistic regarding the prospect
> of collaborating with developers without verification expertise, and
> ultimately with the industry.
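To make approach (1) concrete: in a proof assistant such as Isabelle/HOL one typically defines a function, proves properties about it, and then uses the code generator to export it to a functional language such as SML or Haskell. A minimal, hypothetical sketch of the kind of code this produces (a hand-written insertion sort for illustration, not taken from any actual verified development):

```haskell
-- Insertion sort in the purely functional style typical of code
-- exported from a proof assistant. In a real development one would
-- have proved, e.g., that isort returns a sorted permutation of its
-- input before exporting it.
insert :: Ord a => a -> [a] -> [a]
insert x [] = [x]
insert x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insert x ys

isort :: Ord a => [a] -> [a]
isort = foldr insert []
```

In practice, mechanically generated code is usually less readable than a hand-written sketch like this, which is one reason approach (2) can be more palatable to developers without verification expertise.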
I think you are right. However, I'd like to add that it is very
difficult (if not practically impossible) to verify functional
properties of large systems that were developed *without* verification
as a goal from the start. The reason is that the verifier has to guess
what the programmers were thinking. The resulting reverse-engineered
invariants tend to become messier than invariants for clean,
verification-friendly programs, and messier invariants make every
proof that depends on them more painful.
There is a danger with approach (2): seemingly small changes to the
source code can have a significant impact on the carefully developed
proofs. Proofs that once worked can easily get far out of step with
the continuously developed source code.
As far as I know, the most noteworthy example in category (2) is the
OS verification done at NICTA. The OS that they verified
was developed with verification in mind from the start, see:
Another successful application of approach (2) is the hardware
verification that's done at Centaur Technology using ACL2, see:
I would also like to add that it is possible and useful to mix (1) and
(2), e.g. use performance-tuned code through method (2) in a context
that in general uses method (1). I've done this in various projects,
see http://www.cl.cam.ac.uk/~mom22/stacks.html for pointers.
On 29 June 2015 at 13:44, Andrei Popescu <a.popescu at mdx.ac.uk> wrote:
> Dear (large) system verification experts,
> I am surveying the literature on this topic and I identified two approaches to verifying software systems in a proof assistant with acquiring solid adequacy guarantees:
> (1) Develop (the core of) the system in the proof assistant and extract code to a programming language.
> (2) Develop the system in the programming language directly and extract a model to be verified in the proof assistant.
> My 5 cents here are that, while (1) can be convenient for getting things done reliably, (2) is more realistic regarding the prospect of collaborating with developers without verification expertise, and ultimately with the industry. Or can we hope that in the near future "the industry" will embrace the initial development of systems in a proof assistant?
> Of course, there are nuances here, e.g., is one aiming at verifying a large monolithic system or an open-ended class of systems (the latter being closer to the goal of a program verification framework)?
> I know there have been successful projects in both categories and I would be interested to hear opinions and examples concerning these two approaches.
> (This question is related to the thread "proof engineering for program verification", but has a slightly different scope.)
> Many thanks in advance,