Re: [isabelle] Creating ML data is slow. No sharing/caching?

On Wed, 29 Jul 2015, Andreas Lochbihler wrote:

If you really want to measure the evaluation time, you should either make sure that your evaluations result in small terms, or use the ML interface, which avoids most of the pretty-printing overhead. For example,

  value [code]
    "let x = add_expr (fold And (map Prim [1..10]) (Prim 0)) (map Prim [1..1000]) in ()"

is pretty fast (note that HOL's let gets compiled to ML's let, so the let binding is always evaluated) and so is

  definition "four = add_expr (fold And (map Prim [1..10]) (Prim 0)) (map Prim [1..1000])"
  ML ‹@{code four}›
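A minimal sketch of how such an evaluation could be timed from Isabelle/ML, using Timing.timing from the Pure library (the exact point at which @{code} forces evaluation depends on the code-generator setup, so treat this as illustrative rather than a precise benchmark):

  ML ‹
    (* time the evaluation of the generated value, separately from
       any pretty printing; Timing.message renders the elapsed and
       CPU times as a string *)
    val (timing, result) = Timing.timing (fn () => @{code four}) ();
    val _ = writeln (Timing.message timing);
  ›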

I do not know the details of the implementation in Isabelle/Scala, so I cannot tell why 2 is much faster than 4 in terms of pretty printing. My guess is that 2 consists of a long list of values, i.e., you have a fairly flat term structure and special notation for lists (which suppresses a lot of the auxiliary information passed along with the constants).

I've done some ML timing here, here, and here.

The timing for value_maybe_select is about the same as pretty_term + writeln together. Looking at the profile of value_maybe_select, there are a lot of Simplifier operations, which probably belong to the preparation stage of the code generator, including preparation of the result before the actual printing.
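To isolate the printing cost itself, one could time just the pretty_term + writeln part in the same way. A hedged sketch, assuming a context ctxt and a term t are already at hand:

  ML ‹
    (* measure only the pretty-printing stage for a given term t,
       returning the timing message as a string *)
    fun time_pretty ctxt t =
      Timing.message (#1 (Timing.timing
        (fn () => Pretty.string_of (Syntax.pretty_term ctxt t)) ()));
  ›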

So my guess is that "let x = add_expr ..." is so much faster because its result is so small, independently of printing.

In the above timing experiments, Isabelle/Scala is not measured at all. The front-end is asynchronous: it might burn a lot of CPU cycles of its own displaying big output, but that is not directly visible here. I don't think this particular example is big enough to be a real problem. Other examples can bomb the front-end as usual.
