Re: [isabelle] Isabelle Foundation & Certification



I feel I should give a concrete example of what I think the consensus is,
and of why the current state of Isabelle/HOL is not satisfactory:

If I manage to prove in Isabelle, e.g.,

  theorem fermats_theorem:
    "∀n::nat>2. ¬(∃a::nat>0. ∃b::nat>0. ∃c::nat>0. a^n + b^n = c^n)"

then, to believe that I really proved Fermat, one only has to check that
the definitions of the natural number library really match the expected
operations on naturals, and that my statement of the theorem really
matches Fermat's theorem.
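As a minimal sketch of what such a check might look like (these illustrative lemmas are my addition, not part of the original mail), one can spot-check that the library's `nat`, `^` and `+` really denote the expected operations by evaluating small concrete instances:

  (* Sketch: confirm by computation that the library operations
     behave as expected on small concrete instances *)
  lemma "(3::nat) ^ 2 + 4 ^ 2 = 5 ^ 2"
    by simp

  lemma "(2::nat) + 2 = 4"
    by simp

Such checks are cheap precisely because, in a properly definitional system, they are all one needs beyond reading the statement itself.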

However, in current Isabelle, one additionally has to check every single
definition, even the unrelated ones, for consistency, in order to rule
out, e.g., the following proof. Note that this proof, assuming Fermat's
theorem really holds, does not even introduce an inconsistency into the
logic ... it is just a hidden axiomatization of Fermat:

  (* Hide and obfuscate this stuff well, such that it is not too
     easy to find for a human, but ensure that ATPs still find it
  *)
  definition "P ≡
      ∀n::nat>2. ¬(∃a::nat>0. ∃b::nat>0. ∃c::nat>0. a^n + b^n = c^n)"

  consts c :: bool
  typedef T = "{True,c}" by blast

  lemma L: "(∀(x::T) y. x = y) ⟷ c"
    using Rep_T Rep_T_inject Abs_T_inject by (cases c) force+

  defs c_def: "c ≡ if (∃x::T. ∀y. x=y) then P else ¬P"


  (* Place this main statement prominently in your theories, and hail
    sledgehammer for being really powerful *)
  theorem fermats_theorem:
    "∀n::nat>2. ¬(∃a::nat>0. ∃b::nat>0. ∃c::nat>0. a^n + b^n = c^n)"
    using L P_def c_def by presburger
    (* This proof was found by sledgehammer! *)

So, with sledgehammer becoming more powerful, and with the possibility
of making inconsistent definitions, it is only a matter of time until
sledgehammer finds some nice proof for you that exploits, in some
non-obvious way, the inconsistency of completely unrelated definitions.
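For contrast, here is a sketch (my framing, not part of the original mail) of why a checked definitional mechanism blocks this exploit. The circularity only goes through because "defs" installs an unchecked equation for c *after* the typedef T has already been built on top of the undefined constant c. With a checked "definition" command, c must receive a closed, non-circular definition at declaration time, before anything can depend on it, so the cycle cannot be constructed:

  (* Sketch: with checked definitions, the dependency order is
     forced. c gets a closed definition up front ... *)
  definition c :: bool where "c ≡ True"

  (* ... and only afterwards may a typedef depend on it *)
  typedef T = "{True, c}" by blast

  (* Re-defining c in terms of T, as the exploit does, is now
     impossible: c already has its (unique) definition *)

The point is not the particular definition chosen for c, but that the dependency graph between constants and types stays acyclic by construction.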

--
  Peter



On Do, 2015-09-17 at 01:14 +0200, Jasmin Blanchette wrote:
> Dear Larry, Andrei,
> 
> > Your position here puzzles me now as much as it did the last time we talked about this. Let's forget about Isabelle/HOL for a second, and think of a logic L with axioms and deduction rules, but no definitions. 
> > Further, assume that L is known, or strongly believed to be consistent, in that it does not prove False. Now consider L_D, the logic L augmented with definitional mechanisms. This augmented logic should of course not prove False either! Writing meaningful definitions is the user's responsibility, but having the definitions consistent is the logic L_D's responsibility. Guaranteed consistency distinguishes definitions from arbitrary new axioms -- I learned this years ago from your papers and books.
> 
> I can only second this. After reading books like the Isabelle tutorial, which has Larry as a coauthor, I developed a certain understanding of what "definitional" and "foundational" mean, and was for many years under the impression that there was a strong consensus on this in the proof assistant communities. In this context, I find Larry's comments rather puzzling. In fact, I agree with almost every single sentence he wrote, but
> 
>     To my mind, a soundness bug is when a valid expression or proof state is transformed into something wrong.
> 
> violates the very notion of "definitional". At some point, we will have to make up our minds as to whether our definitions are definitions or just arbitrary axioms (and whether "typedef"s count as definition).
> 
> Mark's comments, which I just read, also neatly summarize what I thought until recently was a consensus shared also by Larry.
> 
> Jasmin
> 
> 
