Re: [isabelle] Performance bottleneck in Theory_Data.merge

I think the position information there boils down to what is
fetched by
  let val pos = Position.thread_data () in
in the definition of Theory_Data' in src/Pure/context.ML.
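
To illustrate the consequence (a paraphrase, not the actual Pure
sources): Position.thread_data returns the position of the enclosing
command if one is bound for the current thread, and Position.none
otherwise. Position.here is the real pretty-printer; the label logic
below is invented for illustration, but it shows why a slot declared
outside any command context prints with no location at all:

  let
    val pos = Position.thread_data ();
    (* Position.here produces an empty string for an empty position,
       so the timing line carries no location information *)
    val label = "Theory_Data.merge" ^ Position.here pos;
  in ... end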

I guess that means that the data tables which don't have a position are
declared at points where the thread-local position wasn't set. For the
slots that do have one, the position links pretty consistently to ML
blocks and to ML_file commands, so the anonymous slot is probably
theory data declared in some special ML file, maybe one of the
ones in Pure. That's not much help, is it?

If you're keen to pin it down, I'd be tempted to locally edit
context.ML to expose an alternative Theory_Data interface that lets
the client supply a name, which would then appear alongside the
position info in the timing output.
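
One possible shape for such an interface (entirely hypothetical: the
argument signature below mimics the real THEORY_DATA_ARGS in
context.ML, modulo version differences, but the added name field is my
invention):

  signature NAMED_THEORY_DATA_ARGS =
  sig
    val name: string   (* label to print next to the position in timing output *)
    type T
    val empty: T
    val merge: T * T -> T
  end;

With that, the timing report could print the client-supplied name even
for slots whose declaration position is empty.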

I was motivated to investigate this a little because I've had issues with
this in the past.

Best regards,

On 2021-07-13 16:02, Norbert Schirmer via Cl-isabelle-users wrote:

I am trying to track down a performance bottleneck in theory merge.

I enabled timing information by

ML ‹val _ = Context.timing := true›

Unfortunately the position information for the "slow" data slot is
not available; I just get the message:

44.390s elapsed time, 88.221s cpu time, 0.000s GC time

This is in contrast to other data slots, where the position information
is available as a link or a file reference:

0.016s elapsed time, 0.030s cpu time, 0.000s GC time

Theory_Data.merge (file "global_theory.ML")
0.264s elapsed time, 0.523s cpu time, 0.000s GC time

Any idea how to continue?

What is the reason that the position information is lost? Could this
be a hint as to which kind of data slot it might be?



Norbert Schirmer (nschirmer at
 SEG Formal Verification
