Re: [isabelle] [Coq-Club] Large bodies of knowledge

Thanks for the references, Josef. I will read them with interest.

It may be worth stating, though, that the project I wrote about is more type theory than expert system and that my questions probably pertain mostly to mathematical-reasoning efforts.


On 2/3/14, 7:17 PM, Josef Urban wrote:
This seems close to what has been tried in Project Halo. There was a
talk about automated reasoning challenges over the textbook "Campbell
Biology" last year at the CADE KInAR workshop. There is of
course also the recent $1B that IBM is pumping into Watson.


On Mon, Feb 3, 2014 at 5:09 AM, Rene Vestergaard <vester at> wrote:
I will shortly be attempting to reach an audience that is, for us, non-standard, with a project that includes the verification of the complete reasoning in a molecular-biology monograph.

The primary sales argument is by proxy: there's a Curry-Howard component to the project that allows us to solve an open problem in the application domain.

The primary scientific argument concerns formal reasoning, including the value of formalising/verifying a large body of knowledge, be it a textbook, a monograph, a "big" result, or similar.

I have my own personal arguments for why large applications are a good idea:
- tests of and guidance for maturity/naturalness/expressivity of tools and methodologies,
- development of libraries/momentum/etc.,
- teaching/industrial/etc. purposes,
- retrodiction and, of course,
- assuring large case splits and proof by reflection.

I was hoping people could help me with:
1) What big applications have been done, and what arguments were used for them?
2) Do those arguments hold up in retrospect?

While 1) can be answered with references, I am particularly hoping that some of the people behind these projects would attempt to answer 2) as best they can.

