Re: [isabelle] lexical matters

Once you assume malicious users, you are turning theorem-proving software into security software. The time for this may come, but it will require a significant investment in security engineering. The first step would be to develop a realistic threat model. And this isn't really possible in the absence of actual attacks. It will be interesting to see if there are any over the coming years.


On 18 Jan 2011, at 14:23, Makarius wrote:

> Yes, depending how much criminal energy you want to invest.
> Actually, this is my favourite theoretical attack on LCF prover integrity. After "The Matrix" it also has a name, see
> We could harden Isabelle/ML against this, since the ML environment is under our control.  Since this has no practical relevance, though, it has not been done so far.  It is important to keep focused on real problems.
