Re: [isabelle] Splicing runtime ML values into Isar



First of all, thanks for your valuable comments.

> There are endless possibilities, but it also needs some efforts to make
> concrete applications work robustly with the endless features we have
> already accumulated.

Indeed. That's why I posted this particular use case here: to get an
idea of whether this is something people would actually want to use.

>   * There is no need to poke into global refs.  The implicit ML context
>     can be used to transport values in and out of ML "eval" in a
>     value-oriented manner.  See the implementation of the "tactic" proof
>     method, with its Method.set_tactic slot.

While refactoring my code to use theory/context data slots, I got
carried away and implemented a general mechanism for typed evaluation. I
uploaded it to a dedicated repository: <http://git.io/vINfn>

It can be used like this:

  typed_evaluation foo = ‹int›

  Typed_Evaluation.eval @{token foo} ‹3› @{context}

There might be a way to get rid of the explicit registration of the
"token", but I haven't found one yet.

>   * Term translations operate on rather raw parse trees.  You have already
>     discovered that in the examples: referring to a local "x" is not
>     immediately clear.  It would require some mechanism to "protect"
>     already internal terms within the parse tree, but that would have to
>     cooperate properly with the other syntax phases and the term "check"
>     phases (type inference etc.).

I haven't quite understood the mechanics yet. It appears that parsing
already fails, which makes me wonder whether it would be sufficient to
leave a spliced ML expression uninterpreted until checking. This would
probably warrant a separate check phase.
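
If that route is viable, I would expect such splices to be resolved by
an additional check phase registered via Syntax_Phases.term_check. A
very rough sketch of what I have in mind; the phase name and the check
function are of course made up, and I am not entirely sure the
signature I am assuming for term_check is exact:

  (* assumed: Syntax_Phases.term_check: int -> string ->
       (Proof.context -> term list -> term list) -> Context.generic -> Context.generic *)

  (* would replace uninterpreted ML splices by their evaluated results;
     for now it just passes the terms through unchanged *)
  fun resolve_splices (_: Proof.context) (ts: term list) = ts;

  val setup =
    Context.theory_map (Syntax_Phases.term_check 0 "resolve ML splices" resolve_splices);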

>   * Morphisms, which are particularly important for attribute expressions,
>     introduce another dimension of higher abstract nonsense.  One needs to
>     look closely that embedded ML works with that, but not all aspects are
>     properly implemented in the system and existing antiquotations.
>     E.g. ML antiquotations themselves ignore morphisms.

I don't understand at all how morphisms come into play here. Are you
suggesting that the embedded ML should somehow have access to a
morphism? Which morphism?

Cheers
Lars



