Re: [isabelle] Sandboxed evaluation of Isabelle theories?



Given the combination of an LCF architecture and ML's static scoping rules, I don't see what harm could be done by the insertion of arbitrary ML code. In any event, anybody who can master Isabelle's APIs sufficiently to do anything interesting by inserting ML code deserves top marks for your course :-)
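
A minimal sketch of what this means in practice, assuming Isabelle2013-style ML {* ... *} blocks: the abstract type thm can only be produced by the kernel's inference functions, and a name rebound in inserted code does not patch code that was already compiled against the original binding.

  ML {*
    (* LCF architecture: thm is an abstract type, so theorem values only
       arise from kernel inferences or existing proofs, e.g. *)
    val r : thm = @{thm refl};

    (* something like  val bogus : thm = @{term "False"}  is rejected by
       the ML type checker, because a term is not a thm *)

    (* static scoping: this shadows writeln for code compiled after this
       point only; previously compiled code keeps the original binding *)
    fun writeln _ = ();
  *}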

Larry Paulson


On 12 Feb 2013, at 08:42, Joachim Breitner <breitner at kit.edu> wrote:

> Dear List,
> 
> I am in the process of overhauling the Isabelle tutorial¹ at the
> University of Karlsruhe. Since we already have a fairly nice system for
> submitting programming solutions (the Praktomat²), I am considering
> using it for the submitted Isabelle solutions as well. For just
> submitting and commenting on the files, this would work well.
> 
> The next step would be to automatically check the files at submission,
> e.g. by running them through isabelle-process or isabelle build. But is
> that safe? I believe not, as inside ML {* ... *}, arbitrary missiles
> could be launched.
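> 
> For instance (a minimal sketch, assuming the standard Poly/ML basis is
> reachable from Isabelle/ML), a submitted theory could contain
> 
>   theory Evil
>   imports Main
>   begin
> 
>   ML {*
>     (* arbitrary ML runs when the theory is checked, including calls
>        out to the operating system; the path here is only illustrative *)
>     val _ = OS.Process.system "rm -rf $HOME/grading";
>   *}
> 
>   end
> 
> and that command would run with the privileges of the checking process.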
> 
> Is there an option that disables all such unsafe features, that would
> allow running a theory from an untrusted source?
> 
> (I’m ignoring resource issues here; limits on those could be enforced
> by other means.)
> 
> Thanks,
> Joachim
> 
> ¹ http://pp.ipd.kit.edu/lehre/SS2013/tba/
> ² http://pp.ipd.kit.edu/project.php?id=34 and 
>  https://github.com/danielkleinert/Praktomat
> 
> -- 
> Dipl.-Math. Dipl.-Inform. Joachim Breitner
> Wissenschaftlicher Mitarbeiter
> http://pp.ipd.kit.edu/~breitner





