Re: [isabelle] lexical matters
On 19/01/2011, at 1:50 PM, mark at proof-technologies.com wrote:
> On 18/1/11 10:22 PM, Gerwin Klein <gerwin.klein at nicta.com.au> wrote:
>>> We should remember that at the moment it is very easy for someone to
>>> maliciously produce what appears to be a formal proof of some statement
>> I don't think this is a problem in practice. Theorem provers are already
>> used in big certification projects. The evaluators in such projects
>> know what they are doing,
> How can you be so confident about this?
From talking to some of them. I'm sure this is not the case for all evaluators, but there are people in that business who have used Isabelle for longer than most of the current developers.
> In the large, safety-critical
> certification projects I have been involved with, the evaluators do the best
> they can with the tools they have available. Proof checkers with readable
> and highly trustworthy output are a luxury not available to them.
Depends on the project, of course, and what level of certification. Currently not that many certifications with interactive formal proof exist, but some do (ACL2 has been used quite a bit). As you say, evaluators need to take the best that is available. LCF provers are a long way ahead of what they usually have to deal with. That's why I don't think it's a problem in practice.
>> at least up to the level where they would easily
>> be able to spot attempts to make syntactically misleading proofs.
>> It's easy enough to scan for ML blocks or similar in theory files, spot
>> duplicate identifiers, etc., and demand good explanations for their
>> presence (or just forbid them). And the threat of just using a proof checker is
>> always there, so why try to cheat on syntax?
> The input in this process is the full power of an ML program. We all know
> that it is possible to hide incredibly subtle things in huge programs. Are
> we to ban non-trivial ML in the process of producing a proof?
If you're really concerned about malicious users, why not? Usual Isabelle interaction makes no use of ML. Very few of the AFP entries, many of them substantial, do anything remotely like that.
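For what it's worth, a scan of the kind described above can be almost trivially crude. A hypothetical sketch (the keyword list is illustrative only, not an exhaustive or official set of Isabelle's ML-embedding commands):

```python
import re

# Hypothetical crude audit pass: flag lines in an Isabelle theory file
# that open raw ML blocks. An evaluator would tune the keyword list to
# the Isabelle version actually in use.
ML_KEYWORDS = re.compile(r"^\s*(ML|ML_file|ML_val|ML_prf|setup|local_setup)\b")

def flag_ml_blocks(theory_text):
    """Return (line_number, line) pairs an auditor should review by hand."""
    return [(i + 1, line.rstrip())
            for i, line in enumerate(theory_text.splitlines())
            if ML_KEYWORDS.match(line)]

sample = (
    "theory Scratch imports Main begin\n"
    "lemma triv: \"x = x\" by simp\n"
    "ML \\<open>val secret = 42\\<close>\n"
    "end\n"
)
for lineno, line in flag_ml_blocks(sample):
    print(lineno, line)  # flags line 3, the ML block
```

That is obviously not a security mechanism, just a way to make sure every embedded ML block gets human attention, which is the point being made here: the amount of such code in a normal Isabelle development is small enough that reviewing each occurrence is feasible.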
> This would be
> forcing contractors to do their work with one hand tied behind their back.
> The contracted, safety-critical proof work I have been involved with would
> certainly have been completely infeasible without writing large amounts of
> ML program to support my proof work.
In Isabelle? In HOL4 or HOL Light, sure, ML is their main interaction mode, but Isabelle hasn't had the ML-as-user-interface paradigm for quite a few years.
I'm not saying that you'll never want to use any ML for bigger advanced projects, but these ML bits are a long way from complex ML programs. Compared to the effort that should go into validating models and statements, checking small amounts of proof-method ML code for pretty-printing cheat attempts would be trivial (there is no reason to have any pretty-printing code there at all). There's no reason evaluators should have to accept huge amounts of complex ML code.
> It is far better for the auditor if they can treat the ML code that
> establishes the final theorem as some sort of black box. To do this the
> theorem prover needs certain basic properties, including a trustworthy
> pretty printer.
Well, there is one, you just don't seem to like it: inspecting the term on the ML level. As Michael says, it's at least as good as Lisp s-expressions.
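The structural-inspection point can be illustrated with a toy sketch (Python here for brevity; the Const/Free/App names loosely mimic an LCF-style term datatype and are not Isabelle's actual ML interface). A printer that walks the raw constructors ignores all notation, so no pretty-printing trick can change what it shows:

```python
from dataclasses import dataclass

# Toy term datatype, loosely modeled on an LCF-style prover's term
# representation (hypothetical; not Isabelle's real ML types).
@dataclass
class Const:
    name: str

@dataclass
class Free:
    name: str

@dataclass
class App:
    fun: object
    arg: object

def raw_print(t):
    """Print a term as an s-expression of its raw constructors,
    bypassing any pretty printer or user-defined notation."""
    if isinstance(t, Const):
        return f'(Const "{t.name}")'
    if isinstance(t, Free):
        return f'(Free "{t.name}")'
    if isinstance(t, App):
        return f'(App {raw_print(t.fun)} {raw_print(t.arg)})'
    raise TypeError(t)

# Even if someone rebinds syntax so that this term pretty-prints as
# "x + y", the raw constructor name still gives the game away:
t = App(App(Const("Groups.minus"), Free("x")), Free("y"))
print(raw_print(t))
# prints: (App (App (Const "Groups.minus") (Free "x")) (Free "y"))
```

In Isabelle itself the analogous low-tech check is to look at the term value in ML rather than at its rendered syntax, which is exactly the "as good as s-expressions" point.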
There's a difference between accidentally creating something misleading (reasonably hard to do if you use the normal interface) and a malicious user with motivation and criminal energy. If you want to guard against the latter, low-tech plain terms and external proof checking surely bring more assurance than further infrastructure.
>> I'm with Larry on this: the much bigger problem is to convince yourself
>> the model and final theorem are in any way useful or right. This is where
>> cheating will happen much earlier.
> I did not say that this was not a problem! This is the big problem!
I guess we're all mostly of the same opinion fundamentally anyway, just with different views on where to put resources and emphasis.
> this is the problem that I am advocating needs a tool that properly supports
> the process - the process of determining that the model and final theorem
> are right (as well as that the final theorem has actually been proved).
> This process involves using the pretty printer.
Checking that the proof shows what you think it does involves the pretty printer only at a very shallow level. It's not that the Isabelle pretty printer can't be trusted to support development, or that it will magically show us wrong terms.
I've often seen and written theorems that didn't mean what I thought they meant. So far it has never been the fault of the pretty printer. It could be, because you could trick it with some effort, but getting rid of that possibility is about assurance, not about finding the right proof statement. It can have different solutions.