[isabelle] Problems with auto (at ML level)

Dear all,

I found a somewhat strange difference in the behaviour of auto_tac
between an Isar-level execution and an ML-level execution:

lemma aux :
"triangle ?X3X79 ?X2X77 ?X1X75 \<and> ?X3X79 ~= ?X2X77 \<and> ?X2X77 ~= ?X1X75 \<and> ?X3X79 ~= ?X1X75"
apply(auto intro: abs_cases )

does the trick: it searches among several intro rules for a solution
that instantiates the meta-variables conveniently.

In contrast:
ML{*
(* build a clasimpset from the local context, adding thms as
   (unsafe) introduction rules, and run auto_tac on it *)
fun auto_solver thms {prems : Thm.thm list, context : Proof.context} =
  auto_tac ((local_claset_of context) addIs thms,
            local_simpset_of context)
*}

lemma aux :
"triangle ?X3X79 ?X2X77 ?X1X75 \<and> ?X3X79 ~= ?X2X77 \<and> ?X2X77 ~= ?X1X75 \<and> ?X3X79 ~= ?X1X75"
apply(tactic "(auto_solver (thms \"abs_cases\")) {prems = [], context = @{context}}")

behaves as if "auto intro!: abs_cases" had been fired: backtracking is
reduced to choosing the first match, and the proof gets stuck there.
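For reference, here is a sketch of the two variants side by side, assuming the pre-2009 claset interface, where addIs adds rules as unsafe introduction rules (tried with backtracking, like "intro:") while addSIs adds them as safe rules (applied committedly, like "intro!:"). The function names are made up for illustration:

ML{*
(* Sketch only, assuming the old claset/clasimpset API.
   If addIs already behaves like addSIs in a given setup, the rules
   may be getting picked up by auto's safe steps for another reason. *)
fun auto_solver_unsafe thms {prems = _, context} =
  auto_tac ((local_claset_of context) addIs thms,   (* like intro:  *)
            local_simpset_of context);

fun auto_solver_safe thms {prems = _, context} =
  auto_tac ((local_claset_of context) addSIs thms,  (* like intro!: *)
            local_simpset_of context);
*}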

The full example is in the appendix.

Can anybody point out where I have oversimplified Isar's way of controlling auto_tac?

Best regards,


Attachment: Trian.thy
