Re: [isabelle] Documentation of ML sources



On Tue, 23 Oct 2012, Lukas Bulwahn wrote:

The Isabelle developer tutorial provides simple access to programming in Isabelle with various examples, and a rather simplified view of some topics. It is helpful for starters and beginners, but it only scratches the surface of some functionality.

I envisaged the Quickcheck tool for Isabelle's ML (presented last week in Munich) as a further project to address documentation of the sources. In the short term, I wanted specifications to allow us to document oddities in the system by grading surprising specifications of functions, which could then be addressed at any point in the future, if we consider the surprise severe enough to warrant a change.

In the long term, I was thinking that users and developers could discuss their expectations about functions in this formal setting of properties or contracts; the Quickcheck tool would motivate writing specifications while implementing, and a run-time monitoring tool for specifications would ease changing code during maintenance.
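The kind of specification-based checking sketched above can be illustrated outside Isabelle/ML. The following is a minimal Python sketch of a Quickcheck-style harness (all names here are hypothetical illustrations, not the actual tool presented in Munich): it checks a property on random inputs and reports a counterexample, including one "surprising" specification of the sort the paragraph mentions.

```python
import random

def quickcheck(prop, gen, trials=200, seed=0):
    """Run a property on randomly generated inputs.
    Return a counterexample input, or None if no trial failed."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x
    return None

# A property that holds: sorting is idempotent.
gen_list = lambda rng: [rng.randint(-50, 50) for _ in range(rng.randint(0, 10))]
print(quickcheck(lambda xs: sorted(sorted(xs)) == sorted(xs), gen_list))

# A "surprising" specification: one might expect int(x / y) == x // y,
# but for mixed-sign operands truncation and floor division disagree.
gen_pair = lambda rng: (rng.randint(-50, 50),
                        rng.randint(1, 50) * rng.choice([-1, 1]))
counterexample = quickcheck(lambda p: int(p[0] / p[1]) == p[0] // p[1], gen_pair)
print("counterexample:", counterexample)
```

A graded collection of such failing properties is one way to record oddities without immediately changing the system.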

The funny thing is that I don't think in these categories myself when reading or (re)writing sources. In the past 20 years, I have always felt this discrepancy between what is taught in class and what one does in practice to get very sophisticated systems working and keep them running. What is taught in class has also changed many times in that timespan, and there is no indication that the last word has been said yet.

I also want to point generically to several talks by Alan Kay on the web (e.g. the one called "Programming and Scaling"), where he reflects critically on his past decades of shaping the programming language community. He is now 72 and has grown quite wise. In this context, it would be an interesting research project to investigate what this received Isabelle development actually is, i.e. to turn things from their head onto their feet. In any case, I shall probably add some thoughtful Alan Kay citations to the Isabelle/Isar implementation manual.


NB: Brian agreed with the idea of contracts, as he was pushing for more fine-grained types. His ideas were much more intrusive, changing the implementation itself, whereas specifications/contracts would only add some fine-grained information in other files.

That's an old topic, and one needs to be more specific about it. Historically, we have had situations where more detailed types improved the situation (say, the typed Graph.T module instead of the former ad-hoc functions on lists of pairs), and situations where types make things worse (e.g. every tool having its own "simpset"-style thing, instead of using the universal untyped Proof.context in the proper way).
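The contrast between ad-hoc lists of pairs and a dedicated typed module can be sketched in Python (a loose illustration only; Isabelle's actual Graph.T in Pure has a much richer interface, and the class below is hypothetical):

```python
# Ad-hoc style: a graph is just a list of (source, target) pairs.
# Nothing prevents duplicates, dangling endpoints, or accidental misuse.
edges = [("a", "b"), ("b", "c"), ("a", "b")]

# Typed style: a small graph abstraction with an explicit invariant.
class Graph:
    def __init__(self):
        self._succs = {}  # node -> set of successor nodes

    def add_node(self, n):
        self._succs.setdefault(n, set())
        return self

    def add_edge(self, src, dst):
        # Invariant: both endpoints must already be declared nodes.
        if src not in self._succs or dst not in self._succs:
            raise KeyError("undeclared node")
        self._succs[src].add(dst)
        return self

    def successors(self, n):
        return sorted(self._succs[n])

g = Graph().add_node("a").add_node("b").add_node("c")
g.add_edge("a", "b").add_edge("b", "c").add_edge("a", "b")
print(g.successors("a"))  # duplicate edge collapses: ['b']
```

The typed version turns misuse (an edge to an undeclared node) into an immediate error, which is precisely the kind of improvement the Graph.T example refers to.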


In my opinion, there is already a great deal of documentation for Isabelle's ML sources. As far as I see it, there are two further opportunities for improvement in the very long term:

- Provide ways to cross-link the various documentation sources (and user interfaces to get all relevant documents)

The main reference manuals (isar-ref, system, implementation) are already formally marked up in this respect. It is merely a matter of improving the browser facilities to link them up with the Prover IDE, say. Looking at some ML snippet formally within the editor, one would get annotations / links pointing to occurrences in the massive manuals, for further reading. I have seen this work for Java libraries in Java IDEs, so it is not rocket science.


- Provide more fine-grained descriptions (e.g., specifications that can be checked with the Quickcheck tool) and investigate whether this simplifies or hinders our development process.

Without being more specific about it, my spontaneous reaction is "hinder". There would need to be convincing applications first. How would it address the actual problems that we occasionally have: the wrong context used in some tool, wrong use of "free" vs. "fixed" variables in the context, or non-linear use of theory/local_theory?


Anyway, comparing our code base with that of Java and its standard libraries, we are in a surprisingly good situation --- or Java is surprisingly bad. Scala plus its library is better, but it also faces problems we could never dream of in our pure ML world.


	Makarius




