[isabelle] ML in LCF architecture
On Wed, 19 Jan 2011, mark at proof-technologies.com wrote:
on 19/1/11 4:02 AM, Gerwin Klein <gerwin.klein at nicta.com.au> wrote:
Are we to ban non-trivial ML in the process of producing a proof?
This reduces what can be done in the project.
If you're really concerned about malicious users, why not? Usual
Isabelle interaction makes no use of ML. Very few of the AFP entries,
many of them substantial, do anything remotely like that.
Yes, but I've been involved in large safety-critical projects where use
of ML has been a practical necessity. Formal methods is currently
extremely expensive whenever it involves formal proof (though not in
the projects I have been involved with).
The contracted, safety-critical proof work I have been involved with
would certainly have been completely infeasible without writing large
amounts of ML code to support my proof work.
In Isabelle? HOL4 or HOL Light, sure, that's their main interaction
mode, but Isabelle hasn't had the ML-as-user-interface paradigm for
quite a few years.
Yes, so Isabelle would be safer in this respect because the proof
scripts aren't even in ML. But to enable cost-effective large formal
verification projects involving formal proof, I think allowing the
contractor to use the power of ML to overcome practical problems,
specific to the particular project, would be an extremely useful thing.
I'm talking about bespoke automation and bespoke semi-automation. Now
it's always possible to do a bit of bespoke pre-processing and still use
a restricted, non-ML paradigm, but this will often have its own
integrity risks and is in any case fundamentally limited. We need to
give contractors the full power of ML to get the costs of full formal
verification down. But we shouldn't trust them not to be malicious!
Here I am sensing some issues that are specific to the classic HOL family,
not Isabelle nor LCF itself. Looking at the sources of good old Cambridge
LCF (from Larry's website), you have a LISP engine that implements an ML
interpreter that is merely a symbolic computation engine, so ML proof
scripts cannot affect the integrity of the system itself.
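That kernel discipline can be sketched in a few lines of Standard ML; the names below are purely illustrative, not taken from Cambridge LCF or any actual prover:

```sml
(* Minimal sketch of the LCF kernel discipline.  The opaque signature
   ascription ":>" hides the Thm constructor, so user ML code can only
   obtain values of type thm by going through the kernel's inference
   rules -- it cannot forge a theorem directly. *)
signature KERNEL =
sig
  eqtype term
  type thm
  val mk_var : string -> term
  val assume : term -> thm          (* rule: A |- A *)
  val concl  : thm -> term
end;

structure Kernel :> KERNEL =
struct
  datatype term = Var of string
  datatype thm = Thm of term list * term  (* hypotheses |- conclusion *)
  fun mk_var s = Var s
  fun assume t = Thm ([t], t)
  fun concl (Thm (_, c)) = c
end;
```

Outside the structure, `Kernel.Thm` is simply not in scope, so even arbitrary ML in the same session can only build theorems via the exported rules.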
Later HOL implementations have done this differently, by starting with an
off-the-shelf ML system (SML or OCaml) and implementing the prover such
that it shares the ML environment with user tools and proof scripts. This
has opened a few potential trustworthiness problems, although I would
not call any of them really critical.
In Isabelle we have moved more and more away from that raw exposure of ML,
not just by avoiding ML in regular proof scripts. Isabelle/ML is embedded
into a managed context of the Isar infrastructure. There were numerous
practical demands motivating this, such as robust undo history and
support for parallel execution.
Taking this approach to the extreme, one could easily make Isabelle/ML
a sandboxed environment for purely symbolic computations, without any
access to system functions of Standard ML, or the Isabelle/Pure
implementation itself. As mentioned before, Poly/ML provides robust means
to invoke ML at runtime, such that user code cannot mess up your system.
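As a rough illustration of what such runtime invocation looks like (a sketch assuming Poly/ML's documented `PolyML.compiler` hook; a real sandbox would additionally pass a restricted name space through the compiler parameters, which is not shown here):

```sml
(* Sketch: compile and execute an ML source string at runtime under
   Poly/ML.  PolyML.compiler takes a character reader plus a list of
   compiler parameters and returns the compiled code as a function to
   be executed.  Supplying a restricted CPNameSpace parameter instead
   of the empty list would hide system functions from the compiled
   code. *)
fun eval_ml (src: string) : unit =
  let
    val pos = ref 0
    fun read () =
      if !pos < String.size src
      then SOME (String.sub (src, !pos)) before pos := !pos + 1
      else NONE
  in
    PolyML.compiler (read, []) ()
  end;
```

For example, `eval_ml "val x = 1 + 1;"` compiles and runs the given declaration in the current session.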
Funnily, when Mark encountered the Isabelle tty for the first time some
months ago, his first impulse was to drop out of the managed Isar
environment, in order to have raw access to the underlying ML :-)
BTW, this no longer works in the Isabelle/Scala Prover IDE protocol, but
for other reasons than anxiety about vulnerability of the prover kernel.