Re: [isabelle] Creating ML data is slow. No sharing/caching?



> Preprocessing of the code equations can take considerable time in some
> cases, but AFAIK Florian implemented a cache for code equations. Thus, a
> second invocation of "value [code]" for the same term should not suffer
> from preprocessing time.

Note that the conversions involving code generation are built like
sandwiches on top of each other, each adding a further layer of
preparation and post-polishing to the one below.  For illustration, try
to track down the dependencies of e.g. Code_Runtime.dynamic_value.  You
might consider plugging in some time measurements there to get an idea
of what is really going on.
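
As a starting point, one can wrap individual evaluations in the Timing
combinators from Isabelle/ML.  The following is only a minimal sketch
with hypothetical names: it assumes some constant f with code equations,
applied to some argument x, both accessible via @{code}:

  ML ‹
    (* time one application of the generated ML code; "f" and "x" are
       placeholder names, not constants from this thread *)
    val (time, _) = Timing.timing (fn () => @{code f} @{code x}) ();
    val _ = writeln (Timing.message time);
  ›

As far as I remember, the generated binding for a nullary constant like
"four" below is evaluated already when the ML block is compiled, so the
timing has to wrap a genuine function application to be meaningful.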

	Florian

> 
> Best,
> Andreas
> 
> On 03/09/15 16:11, Makarius wrote:
>> On Wed, 29 Jul 2015, Andreas Lochbihler wrote:
>>
>>> If you really want to measure the evaluation time, you should either
>>> make sure that your evaluations result in small terms or use the ML
>>> interface which avoids most of the pretty-printing overhead.  For
>>> example,
>>>
>>>   value [code]
>>>     "let x = add_expr (fold And (map Prim [1..10]) (Prim 0)) (map Prim
>>>     [1..1000]) in ()"
>>>
>>> is pretty fast (note that HOL's let gets compiled to ML's let, so the
>>> let binding is always evaluated) and so is
>>>
>>>   definition "four = add_expr (fold And (map Prim [1..10]) (Prim 0))
>>>     (map Prim [1..1000])"
>>>   ML ‹ @{code four} ›
>>>
>>> I do not know the details of the implementation in Isabelle/Scala,
>>> so I cannot tell why 2 is much faster than 4 in terms of pretty
>>> printing.  My guess is that 2 consists of a long list of values,
>>> i.e., you have a fairly flat term structure and special notation for
>>> lists (which suppresses a lot of the auxiliary information passed
>>> along with the constants).
>>
>> I've done some ML timing here
>> http://isabelle.in.tum.de/repos/isabelle/file/Isabelle2015/src/HOL/Tools/value.ML#l58
>>
>> and here
>> http://isabelle.in.tum.de/repos/isabelle/file/Isabelle2015/src/HOL/Tools/value.ML#l61
>>
>> and here
>> http://isabelle.in.tum.de/repos/isabelle/file/Isabelle2015/src/HOL/Tools/value.ML#l64
>>
>>
>> The timing for value_maybe_select is about the same as pretty_term +
>> writeln together.  Looking at the profile of value_maybe_select, there
>> are a lot of Simplifier operations, which probably belong to the
>> preparation stage of the code generator, including preparation of the
>> result before actual printing.
>>
>> So my guess is that "let x = add_expr ..." is so much faster because
>> its result is so small, independently of printing.
>>
>>
>> In the above timing experiments, Isabelle/Scala is not measured at
>> all.  The front-end is asynchronous.  It might burn a lot of CPU
>> cycles on its own for displaying big output, but that is not directly
>> seen here.  I don't think this particular example is big enough to be
>> a big problem.  Other examples can bomb the front-end as usual.
>>
>>
>>      Makarius
> 

-- 

PGP available:
http://home.informatik.tu-muenchen.de/haftmann/pgp/florian_haftmann_at_informatik_tu_muenchen_de
