Re: [isabelle] feature request: defining tuples

On Thu, 28 Nov 2013, John Wickerson wrote:

1. Write { definition "foo == %(a,b). blah /\ bleh" }. But this changes
   the foo_def theorem, and doesn't look so visually nice.

2. Write { fun foo where "foo (a,b) = (blah /\ bleh)" }. But this
   changes the foo_def theorem, and I have to remember to use foo.simps
   instead. And I have to put extra parentheses around the definiens,
   because "=" has different precedence to "==". And I have to remove
   foo.simps from the simplifier if I don't want foo expanding all the
   time. And I prefer to reserve "fun" for proper recursive functions,
   which foo isn't.

3. Write { definition "foo a b == blah /\ bleh" }. But there are times
   when it's more idiomatic to tuple than to curry.

From these usage patterns, the mental model of the essence of 'definition'
vs. 'fun' / 'function' is not fully clear.

The 'definition' package is for "simple definitions". It wraps up the basic Local_Theory.define concept, which in turn goes relatively directly to primitive definitions of the logical core (which are not accessible to users these days). It presents itself in an object-logic friendly manner, to allow writing "f x y = rhs" (with HOL equality). The Pure form "f x y == rhs" or even "f == %x y. rhs" is mostly historic -- it is only required for object-logics like ZF that cannot internalize all forms of definition. The conversion of the user specification is represented as a higher-order rewrite system via 'defn' rules -- whatever works here works, and what doesn't work doesn't.
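For illustration, a minimal sketch of the two surface forms (the constant name and right-hand side are made up):

```isabelle
(* HOL form: plain "=" on the user-level equation *)
definition foo :: "nat * nat => bool"
  where "foo = (%(a, b). a < b & b < 10)"

(* the stored foo_def is exactly this equation; the historic Pure form
   with "==" is also accepted, but no longer necessary in HOL:
   definition "foo == %(a, b). a < b & b < 10" *)
```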

The 'function' package is in some sense dual: it incorporates as many features as possible, stopping just short of gravitational collapse. Pattern matching probably even outweighs the recursion / termination aspect. (Interestingly, Scala is a higher-order functional language where recursion and pattern matching are separate. Odersky then uses his remaining complexity budget elsewhere to make it more sophisticated than ML or Haskell.)

The practical situation can be improved in a simple manner like this:

  * Refrain from using == in definitions.  Uniform = simplifies theories
    and allows specifications to be moved between different tools and
    packages more easily: 'theorem', 'definition', 'primrec', 'fun',
    'function' etc.

  * The function package could refrain from exposing its internal
    construction of f as f_def.  This was technically not possible when
    first implemented, but is now just a matter of how Local_Theory.define
    is invoked.  This also avoids the well-known confusion caused by
    unintended "unfold f_def", seen routinely with fresh users.

Then the difference is just implicit simplification or not, and you write something like:

  fun f where [simp del]: "f (x, y, z) ((a, b), (u, v, w)) = rhs"

and make patterns as complex as the function package allows. Later you simplify with f.simps as usual.
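In concrete terms that might look like this (the function name and right-hand side are made up):

```isabelle
fun f :: "nat * nat => nat" where
  [simp del]: "f (x, y) = x + y"

(* f.simps is not in the default simpset, so f stays folded until
   it is expanded explicitly: *)
lemma "f (1, 2) = 3"
  by (simp add: f.simps)
```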

A side-alley of this thread is the old question of what the "unfold" proof method or the 'unfolding' command should really do. There are various well-known anomalies in its wording, and technical snags of "unfolding f_def" where the equation does not always apply (because patterns don't match).

So far I have always considered a purist approach more desirable (at least in theory): reconstruct a precise equation "f == %x y z. rhs" from the user-space equation "f x y z = rhs", really *unfold* that, and don't rewrite with arbitrary rules. Any such change and clarification is apt to break existing applications, as usual.
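A hypothetical instance of that snag (names made up): the equation for a tupled function carries a pattern on the lhs, so it fails to apply when the argument is an opaque variable.

```isabelle
fun f :: "nat * nat => nat" where
  "f (x, y) = x + y"

(* f.simps requires the argument to be literally a pair; it does not
   rewrite "f p" for a plain variable p, so one has to split p first.
   A reconstructed equation f == %(x, y). x + y would unfold
   unconditionally. *)
lemma "f p = fst p + snd p"
  by (cases p) simp
```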

An alternative speculation is this: "unfold f" takes a term f and looks up its equational "Spec_Rules", which is a new concept from some years ago. It then just rewrites with them by simplification, like the present f_def or f.simps would do. This unifies 'definition' and 'fun' in that respect, and makes the system appear more like Coq, where terms have canonical "computation rules".

