[isabelle] Sandboxed evaluation of Isabelle theories?



Dear List,

I am in the process of overhauling the Isabelle tutorial¹ at the
University of Karlsruhe. Since we already have a somewhat nice system
for submitting programming solutions (the Praktomat²), I am considering
using it for the submitted Isabelle solutions as well. For just
submitting and commenting on the files, this would work well.

The next step would be to automatically check the files at submission
time, e.g. by running them through isabelle-process or isabelle build.
But is that safe? I believe not, as inside ML {* ... *}, arbitrary
missiles could be launched.
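
To make the concern concrete, here is a minimal sketch of what a
malicious submission could look like (the theory name and the shell
command are of course just placeholders); the embedded ML runs with
the full privileges of the checking process as soon as the theory is
loaded:

  theory Evil
  imports Main
  begin

  ML {*
    (* "echo" is a harmless stand-in for rm, curl, etc. *)
    val _ = Isabelle_System.bash "echo pwned > /tmp/pwned"
  *}

  end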

Is there an option that disables all such unsafe features, so that a
theory from an untrusted source could be run safely?

(I’m ignoring resource issues here; these could be enforced by other
means.)
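
For instance, a wrapper script around the checking step could impose
such limits; a hypothetical sketch, with placeholder values:

  ulimit -t 300       # CPU time, seconds
  ulimit -v 4194304   # virtual memory, KiB
  timeout 600 isabelle build -D submission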

Thanks,
Joachim

¹ http://pp.ipd.kit.edu/lehre/SS2013/tba/
² http://pp.ipd.kit.edu/project.php?id=34 and 
  https://github.com/danielkleinert/Praktomat

-- 
Dipl.-Math. Dipl.-Inform. Joachim Breitner
Wissenschaftlicher Mitarbeiter
http://pp.ipd.kit.edu/~breitner
