Re: [isabelle] Sandboxed evaluation of Isabelle theories?

It depends on the definition of "harm". The student may be able to erase all of the professor's files, and that's certainly bad. If your students are really that crazy, you could run the assessments on a dedicated machine. But it would take a pretty clever series of UNIX calls to trick the system into thinking you had finished a proof.

Of course, for a tutorial course, it would be okay to prohibit access to ML altogether. That shouldn't be difficult to achieve.
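For illustration, one coarse way to enforce that would be a purely textual pre-filter on the submitted theory file, run before the actual check. This is only a sketch, not an official Isabelle mechanism; the file name Submission.thy is hypothetical and the command list is illustrative rather than exhaustive:

  # hypothetical pre-filter (GNU grep): refuse submissions that use
  # ML-level commands; extend the command list as needed
  if grep -E -q '^[[:space:]]*(ML|ML_val|ML_command|ML_file|setup)\b' Submission.thy
  then
    echo "Submission.thy rejected: ML commands are not allowed" >&2
    exit 1
  fi

A determined student could still disguise such commands, so a filter like this complements, rather than replaces, checking the submissions on an expendable machine.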

Larry Paulson


On 12 Feb 2013, at 12:13, Makarius <makarius at sketis.net> wrote:

> On Tue, 12 Feb 2013, Lawrence Paulson wrote:
> 
>> Given the combination of an LCF architecture and ML's static scoping rules, I don't see what harm could be done by the insertion of arbitrary ML code. In any event, anybody who can master Isabelle's APIs sufficiently to do anything interesting by inserting ML code deserves top marks for your course :-)
> 
> That's a course where top marks are very easy to get:
> 
>  ML {* Isabelle_System.bash "echo 666" *}
> 
> It is left as an exercise to put something really dangerous into the above shell script.
> 
> 
> To tighten up the system a little bit, there is the isabelle-process option -S to disallow such gross insecurities.
> 
> If one wanted to take that issue very seriously, one could sandbox the ML compiler invocation that is embedded into Isabelle/Isar.  The infrastructure is in principle all there, but we don't have concrete applications for that, and nobody is paying large sums to make it work and keep it working over time.
> 
> 
> 	Makarius
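For reference, the -S option mentioned above goes on the isabelle-process command line. A minimal sketch, assuming the HOL heap and an Isabelle release of roughly this vintage; the exact option set and the precise scope of secure mode vary between versions, so consult the usage output of isabelle-process and the system manual:

  # start an interactive Isar toplevel on the HOL heap in secure mode (-S);
  # there, gross insecurities like the ML/bash example quoted above are refused
  isabelle-process -S -I HOL

How the student theories are then fed to that toplevel (interactively or through a batch wrapper) is left open here; the point is merely where the flag goes.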
