On Tue, 21 May 2013, Jesus Aransay wrote:
> in the attached file you can find a simple example of a list of lists representing a 250 * 250 matrix, which takes really long time to be processed in Isabelle 2013
> actually, I gave up after a few minutes of processing without success.
I have experimented with this a little; see my version of Random_Binary_Matrix.thy included here. It takes about 45s on my cheap i7 machine.
My first idea was to bypass the concrete syntax parser via a parse translation, turning a numeral token directly into some pre-term. It turned out that parsing alone takes only 30s for your example, but the remaining "check" phase for type reconstruction is rather slow (lots of unification of types in several stages). I also tried some refinements of the syntax translations: producing fewer concrete constants (many bounds with some beta redex, to simulate a local "let" within the term), and avoiding polymorphism in the constants, but neither helped very much.
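For reference, the general shape of such a parse translation is roughly as follows. This is only a hedged sketch, not the code from the attached theory: the names "_bits" and bits_tr are made up for illustration, the token is assumed to arrive as a plain Free, and position constraints are omitted.

```isabelle
syntax "_bits" :: "num_token ⇒ bool list"  ("BITS _")

parse_translation ‹
  let
    (* map each digit character of the token directly to a pre-term,
       bypassing the generic numeral parser *)
    fun bit #"0" = Syntax.const @{const_syntax False}
      | bit _ = Syntax.const @{const_syntax True};
    fun bits_tr _ [Free (str, _)] =
          fold_rev (fn c => fn t =>
              Syntax.const @{const_syntax Cons} $ bit c $ t)
            (String.explode str)
            (Syntax.const @{const_syntax Nil})
      | bits_tr _ ts = raise TERM ("bits_tr", ts);
  in [(@{syntax_const "_bits"}, bits_tr)] end
›
```

The point is that the token is expanded into a list pre-term in one pass, instead of going through the generic numeral syntax for every entry; the expensive part that remains is the subsequent type reconstruction over the resulting huge term.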
So the second approach uses "make_bits" with the "define" function in Isabelle/ML. Such plain Isabelle/ML operations are less cryptic than syntax operations. The remaining slowness is due to generous re-certification of terms in certain parts of the system for basic definitions. You will probably face other, more serious problems when working with such huge formal bit lists by other means, probably also in the code generator.
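The basic shape of that approach is roughly the following sketch. It is illustrative only: define_bits and bits_row are hypothetical names, not the ones from the attached theory.

```isabelle
ML ‹
  (* define a constant for a large bit-list term directly in
     Isabelle/ML, bypassing the concrete syntax layer entirely *)
  fun define_bits binding bits lthy =
    let
      val mk_bit = fn true => @{term True} | false => @{term False};
      val rhs = HOLogic.mk_list @{typ bool} (map mk_bit bits);
    in
      Local_Theory.define ((binding, NoSyn),
        ((Thm.def_binding binding, []), rhs)) lthy
    end;
›

local_setup ‹
  define_bits @{binding bits_row} [true, false, true, true] #> snd
›
```

Since the right-hand side is built as an ML value and never printed or parsed, the only substantial cost left is certification of the term when the definition is made.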
Here you see that Isabelle was not built with big-data applications in mind. Scalability to arbitrarily large input is never free: it needs to be addressed explicitly in the implementation, and usually involves trade-offs with other things. Isabelle is better at ridiculous generality than at ridiculous speed.