Re: [isabelle] Isabelle Foundation & Certification
What we usually do to check a proof development is run "isabelle build" and look at the output. Of course, Isabelle (and any system based on the LCF paradigm) allows arbitrary code execution. This should be borne in mind in connection with any idea that the use of a proof assistant eliminates the need to trust the person making the proof. None of us are developing software that has the remotest claim to satisfying any sort of security requirement.
> On 21 Sep 2015, at 15:02, Tjark Weber <tjark.weber at it.uu.se> wrote:
> On Sat, 2015-09-19 at 11:38 +0200, Tjark Weber wrote:
>> But if someone gives you an Isabelle theory and claims that it proves,
>> e.g., FLT, there is no machine that could decide this claim for you.
> To elaborate, and to answer questions that I received via private
> email: The main difficulty here is that Isabelle theories are written
> in a rich and powerful language (that may contain, e.g., embedded ML
> code), to the point that the observable behavior of Isabelle when it
> checks (or more accurately, processes) the theory cannot be trusted.
> Of course, other theorem provers with powerful proof languages also
> suffer from this problem.
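To make the point concrete, here is a minimal sketch (the theory name and message are hypothetical) of why the observable output of checking a theory cannot be trusted: an `ML` block in an Isabelle theory runs arbitrary Standard ML code at processing time, and can print anything it likes.

```
theory Innocuous
  imports Main
begin

(* Hypothetical illustration: this ML block executes arbitrary code
   when the theory is processed, so it can fake reassuring output. *)
ML ‹
  writeln "All proofs checked successfully."
›

end
```

A reader watching the build log sees a reassuring message, but no theorem was proved; only inspecting the theory source (or querying the kernel directly) reveals this.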