Re: [isabelle] Performance considerations of locales

On 23/07/2021 08:11, Norbert Schirmer via Cl-isabelle-users wrote:
> In our applications we have a widespread use of locales, both on the Isar
> toplevel as well as in Isabelle/ML, where locales are auto-generated and
> certain definitions are made and theorems are proved within these locales.
> Recently we hit the tipping point, where actually more CPU time is used for
> the declaration of locales compared to the actual proof load that is
> performed within the locales.

In ancient times, the common understanding was that proofs require most of
the time, and the surrounding setup is negligible. In recent years, our
theory infrastructure has become so rich that it often requires more time to
build than the proofs themselves.

So we need to take more care to trim that down again.

> I did some analysis to get a better understanding of what actually happens
> and want to initiate a discussion on how this situation might be improved
> in the future.
> First some high-level description of our use of locales which triggered the
> tipping point:
> 1. Locales are used to structure and modularize our work. 
> 2. We stay abstract: Our clear focus is on declaring new locales on the
>    basis of existing locales rather than interpreting locales.
> 3. Most of the time an abstract entity within a locale (aka "fixes x::'a")
>    stays the same in the complete locale hierarchy. It is neither renamed
>    nor is its type changed when being imported into another locale. It
>    might be shared among multiple locales.

Points 1 + 2 + 3 seem to suggest that most of the locale interpretation
add-ons from the past 10-15 years are not required. They are nonetheless
central to how locales work today. "Shortcuts" might be possible, but it is
difficult to say what really can or should be done: we need to resist
"improvements" that move towards a terminal state where nobody understands
how things work.

> 4. Within a locale we make definitions and prove theorems which are noted.
>    A typical right-hand side (rhs) of a definition can be a sizeable term
>    containing polymorphic constants. For example, think of a sequence of
>    about 100 monadic binds in a state monad.
> 5. We use attributes on definitions as well as theorems to maintain our
>    data for proof automation.

Points 4 + 5 are more particular to your application, and look like the true
focus of performance improvements.

> I come to the conclusion that the actual hot-spot is the omnipresent
> instantiation of locales. This happens in the roundup of locale
> expressions, both when defining new locales as well as when entering the
> context of a locale. Profiling highlights low-level functions like
> Term.fold_atyps, Term_Subst.generalizeT_same, Term_Subst.inst_same ...

Instantiations --- especially of types within terms --- often show up
prominently in profiles. Over the years, we have done a lot of performance
tuning in this respect. My most recent attempt is here, although it had
hardly any measurable impact on your examples or other applications. In the
worst case, it can even become a bit slower, due to more heap garbage
produced by the underlying 2-3 trees, in contrast to plain lists.

In recent years, I have come to the conclusion that we could remove excessive
redundancy of types within terms, via the "typargs" representation of Consts.
E.g. instead of an overloaded Const "plus" with type instance "T => T => T",
it would be only "plus" applied to a single typarg T. We already have
support for this view within Consts.T of the theory (or Proof.Context).

What remains is a small exercise in cleaning up Isabelle/ML tools after > 35
years of Gordon-style Const term constructors, with their explicit type
instances. The Type/Const ML antiquotations may help to get to that point
within a few years.
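For illustration only, the redundancy argument can be sketched as a toy
model in Python (not Isabelle/ML; `Type`, `fun` and `size` are invented for
this sketch): a constant carrying its full type instance stores strictly
more type nodes than one carrying only its single type argument.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Type:
    """Toy stand-in for Isabelle types; not the real ML datatype."""
    name: str
    args: tuple = ()

    def size(self) -> int:
        # number of type constructor/variable nodes in this type
        return 1 + sum(a.size() for a in self.args)

def fun(a: Type, b: Type) -> Type:
    return Type("fun", (a, b))

T = Type("nat")

# Gordon-style Const: "plus" carries its full type instance T => T => T.
plus_full = ("plus", fun(T, fun(T, T)))
full_size = plus_full[1].size()

# typargs view: "plus" applied to a single type argument T.
plus_typargs = ("plus", (T,))
typargs_size = sum(t.size() for t in plus_typargs[1])

assert full_size == 5        # "fun" twice, T three times
assert typargs_size == 1     # just T itself
```

In a term with hundreds of occurrences of such overloaded constants, this
per-occurrence saving adds up quickly.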

> This process becomes so costly in our application because of (4) and (5),
> and is especially unsatisfying for the user because of (3): from a user
> perspective nothing has to be instantiated.
> It turns out that by using attributes (5) the optimisation of "lazy notes"
> is unfortunately ineffective (in contrast, a theorem with no attributes is
> represented as a lazy value and hence only instantiated when it is actually
> used; this speeds up entering a locale or composing locales).

Note that attributes are just a special case of Local_Theory.declaration in
Isabelle/ML or 'declaration' in Isabelle/Isar: after applying a morphism, you
add your own data to the context; your tools pick up the data later on.

An attribute has a visible "thm" parameter that is always transformed by the
morphism on the spot.

In contrast, a declaration can control precisely what happens when.
Changeset f79dfc7656ae is an example where primitive definitions are stored
by their LHS term, while the thm LHS = RHS is only materialized via the
morphism when required in a corresponding simproc.

This demonstrates how much flexibility the regular local theory user-space
offers, but it requires some understanding of the big picture.

> I guess there is a huge design space to improve the situation. Here are
> some of my thoughts:
> I. Keep the terms small
>    a) Manual workaround: Instead of a local definition we could manually
>       make a global definition of the large rhs, and then only make a
>       local definition having the (folded) global one as rhs. This is
>       more or less the first half of what a local definition does anyway.
>    b) Infrastructure: have an option on the local definition that only
>       introduces the abbreviation but does not note the instantiated
>       equation in the target context.
> II. Improve the use case for the "canonical" locale instances
>     (points 2 + 3) by some sort of caching or pre-evaluated instances.
> III. Extend the lazy notes idea to "lazy attributes", giving user-level
>      control over what is already relevant for the declaration of new
>      locales and what is only relevant for doing proofs later on.
> IV. Make instantiation cheaper, e.g. by changing the low-level
>     representation of cterms (like environment + term-skeleton instead of
>     a fully instantiated term; ML antiquotations to match on cterms and
>     construct cterms might bring the convenience of programming with terms
>     to cterms, without having to fully convert between terms and cterms
>     all the time).

My explanations and experiments should already correlate with most of these
ideas, while avoiding huge upheavals in the underlying infrastructure.
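Idea II, for instance, might amount to little more than a cache keyed by the
locale and its (canonical) instantiation. A hypothetical Python sketch, with
all names invented for illustration and the real roundup cost reduced to a
counter:

```python
from functools import lru_cache

roundups = 0  # counts how often the expensive roundup actually runs

@lru_cache(maxsize=None)
def instantiate_locale(locale: str, type_inst: tuple):
    """Stand-in for the costly locale roundup (Term_Subst traversals etc.)."""
    global roundups
    roundups += 1
    return (locale, type_inst)

# Entering the same locale with its canonical (identity) instance many times:
canonical = (("'a", "'a"),)   # each fixed type maps to itself
for _ in range(100):
    instantiate_locale("partial_order", canonical)

assert roundups == 1  # the expensive work happened only once
```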

Your point IV sounds very ambitious: cterms could certainly benefit from many
reforms, like open bounds as frees (to simplify e.g. Thm.dest_abs). But most
of this will introduce a lot of extra complexity, and might never materialize.
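All the same, the environment + term-skeleton half of IV can at least be
illustrated with a toy Python model (emphatically not the real cterm
representation): composing instantiations only touches a small environment,
and the full term is materialized once, on demand.

```python
def subst(term, env):
    """Fully instantiate a skeleton: the expensive traversal to be delayed."""
    if isinstance(term, str):
        return env.get(term, term)
    return tuple(subst(t, env) for t in term)

class CTerm:
    """Toy model: skeleton + environment, instantiated lazily and only once."""
    def __init__(self, skeleton, env):
        self.skeleton = skeleton
        self.env = dict(env)
        self._forced = None

    def instantiate(self, more_env):
        # composing instantiations is cheap: no term traversal here
        self.env.update(more_env)

    def force(self):
        if self._forced is None:
            self._forced = subst(self.skeleton, self.env)
        return self._forced

ct = CTerm(("plus", "'a", "'a"), {})
ct.instantiate({"'a": "nat"})              # still no traversal
assert ct.force() == ("plus", "nat", "nat")
```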

Right now, I hope that Const-typargs within terms would be sufficient for all
kinds of conceptual simplifications and performance improvements everywhere. I
leave it as an exercise for Isabelle/ML tool builders to update their stuff
after the next release, to use Type/Const antiquotations everywhere.

> Find attached three benchmark theories illustrating some aspects:
> Benchmark_poly.thy: large terms within locale using polymorphic constants
> Benchmark_mono.thy: monomorphic variant
> Benchmark_global.thy: Workaround (I.a), keeping terms in definitions and
> theorems (notes) small.

I have experimented with these benchmarks a bit: often they did not even work
with the regular 16 GB heap model of x86_64_32.

Maybe you can rephrase most of this directly with the 'def' command from
changeset f79dfc7656ae (it should also work with current Isabelle2021).

