Re: [isabelle] lexical matters
on 17/1/11 11:14 PM, Makarius <makarius at sketis.net> wrote:
> There are other, more pressing problems.
I think trustworthiness in theorem provers should always be a top priority.
After all, this is why we are using them in the first place, so long as it
doesn't have serious consequences for other aspects of the theorem prover,
such as efficiency or usability.
> Sitting at a bare-bones ML
> toplevel merely gives you an illusion of "direct" access to a system, but
> there are still many things that can go wrong.
Yes, but I think it's helpful to divide up possible things that can go wrong
into those that are due to the design or implementation of the ML program,
and those that are due to things largely outside the scope of the program
(e.g. bugs in the ML interpreter, OS events interfering with normal
operation, etc, etc). The former is where all the problems arise in
reality, and is very much within our capability to solve. I may be
wrong, but you're presumably talking about the latter?
> An LCF-style prover is reasonably correct by construction, independently
> of concrete syntax input and output. Errors in the internalization/
> externalization do not multiply as quickly as genuine logical errors.
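The "correct by construction" guarantee comes from the LCF kernel's abstract
theorem type: theorem values can only be produced by a small set of trusted
inference rules, so any theorem in existence was derived through the kernel.
Here is a minimal, purely illustrative sketch of that idea in Python (real
LCF-style provers enforce this with ML's abstract types, which give genuine
encapsulation; Python's hiding is only advisory, and all names here are made
up for the example):

```python
# Hypothetical sketch of the LCF kernel idea: an abstract Theorem type
# whose only constructors are trusted inference rules.

class Theorem:
    """A sequent `assumptions |- conclusion`, with formulas encoded as
    nested tuples, e.g. ('->', 'A', 'B') for A -> B."""
    _key = object()  # private token held only by the kernel rules below

    def __init__(self, key, assumptions, conclusion):
        if key is not Theorem._key:
            raise ValueError("theorems can only be built by kernel rules")
        self.assumptions = frozenset(assumptions)
        self.conclusion = conclusion

    def __repr__(self):
        return f"{set(self.assumptions) or '{}'} |- {self.conclusion}"

# The trusted kernel: the only code that can create Theorem values.

def assume(p):
    """p |- p"""
    return Theorem(Theorem._key, {p}, p)

def intro_impl(p, thm):
    """From A, p |- q derive A |- p -> q (discharging assumption p)."""
    return Theorem(Theorem._key, thm.assumptions - {p},
                   ('->', p, thm.conclusion))

def elim_impl(thm_imp, thm_p):
    """From A |- p -> q and B |- p derive A u B |- q (modus ponens)."""
    op, p, q = thm_imp.conclusion
    if op != '->' or p != thm_p.conclusion:
        raise ValueError("conclusions do not match")
    return Theorem(Theorem._key,
                   thm_imp.assumptions | thm_p.assumptions, q)
```

Client code, however complicated, can only combine these rules, e.g.
`intro_impl('p', assume('p'))` yields `{} |- p -> p`; an attempt to call
`Theorem(...)` directly raises an error. Internalization and
externalization (parsing and printing) sit outside this trusted core,
which is why errors there do not compromise soundness the way kernel
bugs would.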
Yes, I agree that the LCF style largely solves problems of trustworthiness,
and that the sort of problems it solves are the most serious. But given
that this has been solved (a long time ago), shouldn't we tackle the