Re: [isabelle] Need a "What's the foundation of Theory.thy" utility


> I think it's merely that you don't support Isabelle/HOL.

I do intend to do this, but it requires significant investment.

> What I'm primarily talking about is what a language can declare; I'm not
> concerned with whether the engine that implements the language has
> integrity, or can be exploited through security holes. What I'm
> requesting is simple. Given two languages, Isabelle/ML and
> Isabelle/Isar, and two theories, Theory1.thy and Theory2.thy, report to
> me what statements in ML and Isar are made in the theories that are
> axiomatic, and anything else that someone decides to label "the
> foundation", and tell me what is axiomatic in Theory2.thy that goes
> beyond Theory1.thy.

What a basic proof auditing capability would do is support the auditor in
confirming that the theorem does indeed get proved and that it is indeed the
desired theorem.  This requires being able to query the theory context, to
understand the definitions used in the statement of the theorem and check
that there is no inconsistency.  It also requires being able to query the
concrete syntax settings, again to understand the statement of the printed
theorem and the printed theory context.  And it also requires a system that
can be trusted to perform sound deductions, correctly report theory context
and faithfully and unambiguously print out theorems and theory context.
Note that it is not necessary for these criteria to be satisfied by the
system used to create the original proof if proof recording can be used to
port the proof to a trusted target system (e.g. HOL Zero).

Being able to return foundation details about the original proof script
(e.g. "the axiom used on line 4,500") is nice but not part of the basic
capability that I am in the process of creating.

> At the easiest level, [by "axiomatic" I mean] keywords in Isar that are
> axiomatic, two of those keywords being "arities" and "axiomatization".

For these purposes, I think you shouldn't be thinking at the proof script
level (i.e. Isar), but at the level of logic (i.e. Isabelle/HOL).
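That said, the most basic version of the requested report — flagging surface-syntax Isar keywords such as "axiomatization" and "arities" — could be sketched roughly as follows. This is purely illustrative: the keyword list is incomplete, the function names are made up, and a real tool would have to parse Isar properly and work at the logical level, for the reasons just given.

```python
import re

# Illustrative, incomplete keyword list; "axiomatization" and
# "arities" are the two mentioned in this thread.
AXIOMATIC_KEYWORDS = ("axiomatization", "arities")

def find_axiomatic_lines(theory_text):
    """Return (line_number, line) pairs whose first token is an
    axiomatic keyword. A crude textual scan, not an Isar parser."""
    hits = []
    for num, line in enumerate(theory_text.splitlines(), start=1):
        stripped = line.lstrip()
        if any(re.match(rf"{kw}\b", stripped) for kw in AXIOMATIC_KEYWORDS):
            hits.append((num, stripped))
    return hits

def foundation_beyond(base_text, extended_text):
    """Hypothetical: axiomatic lines in the extended theory that do
    not occur in the base theory (a textual diff, not a logical
    comparison of the two foundations)."""
    base = {line for _, line in find_axiomatic_lines(base_text)}
    return [(n, l) for n, l in find_axiomatic_lines(extended_text)
            if l not in base]

example = '''theory Theory2
imports Theory1
begin

axiomatization sqrt :: "real => real" where
  sqrt_pos: "0 <= x ==> 0 <= sqrt x"

end
'''
print(find_axiomatic_lines(example))
```

Something like `foundation_beyond(theory1_src, theory2_src)` would then give a first approximation to "what is axiomatic in Theory2.thy that goes beyond Theory1.thy".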

> If someone puts in an axiom that will give them the Riemann Hypothesis,
> which isn't the Riemann Hypothesis, how is the software supposed to
> know that it's nonsense, unless it makes the logic inconsistent?

Exactly, so there's a big human element to this auditing process.  And even
determining whether or not it's inconsistent is not something that can be
done automatically in general.

> The nature of logic is I'm free to start with whatever axioms I want.
> The main thing is to clearly know what the foundation is, not to have
> someone tell you that it's nonsense. What I'm asking for is a 1 day or a
> 1 week job. If I ask for more, no one has the time to do it.

That's the sort of thing I'm trying to support.

> The idea is that Trusted Person has released Hol_Extension.thy and it
> has become a standard extension of HOL, but it's not part of the
> Isabelle2013 distribution, and people are distributing it with their own
> theories. I give the utility an MD5 checksum that Hol_Extension.thy is
> supposed to match when a checksum is run on the file. If it doesn't
> match, then the person is not distributing the right file, and maybe
> they edited it.

I see.  It would be nice to know that a given body of foundational material
is indeed the "standard" foundation.  I should really think about how to
support this kind of thing.
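The checksum check described could be sketched in Python as follows. The function names are made up for illustration; note also that MD5 is collision-prone, so a tool doing this for real might prefer SHA-256 (just change the `algo` argument).

```python
import hashlib

def file_digest(path, algo="md5"):
    """Hex digest of a file, read in chunks to handle large files."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path, expected_hex, algo="md5"):
    """True iff the file's digest matches the published checksum."""
    return file_digest(path, algo) == expected_hex.lower()
```

If `matches_published("Hol_Extension.thy", published_md5)` fails, the person is not distributing the file the Trusted Person released.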

> You would have to tell me what proof auditing is, but I don't classify
> it as proof auditing. "Proof auditing" implies that proofs need to be
> checked because there's the possibility that the underlying software is
> being exploited or has bugs.

I explain proof auditing above.  But yes, it includes checking the proofs,
although this does not necessarily require examination of the proof script
(for example if you are using a proof porting system).

> I use the phrase "or can't be determined to be axiomatic," but I'm only
> speculating that a person might need to look at a block of ML to
> determine whether it declares something axiomatic. The utility would
> report on ML that's used which needs to be inspected by a human, and a
> person who knows enough could decisively determine whether the ML is
> axiomatic. That's what I was talking about, and just knowing about
> anything that's axiomatic.

In the capability I'm building, I'm trying to get away entirely from the
need to examine the proof script, because this is ML (or Isar) that can be
difficult to decipher in terms of fundamentals.  I call this "black-box
proof auditing": you only need to examine the system state to fully audit a
proof, rather than also examine the input used to get it into that state.

> If I state an axiom that gives me a proof of the Riemann Conjecture,
> there's nothing that needs to be audited. The proof is correct, given the
> axiom. Logically, there's no problem with the proof.

Not with what I envisage.  The auditor would definitely fail anything that
assumes something as contentious as the Riemann Conjecture.

> It's worth repeating, that I'm all for you giving me HOL Zero to verify
> Isabelle/HOL, and lots of other people would want that too. But it seems
> at this point, that's years away from happening. They'd have to hire
> you, and you'd have to go to Germany or England to get it done any time
> soon.

Yep, I would need funding to get the capability working for Isabelle/HOL.

