Re: [isabelle] Approximation tactic for very large numbers
Ah sorry, I misread. Try it again with 160000. ;)
My current workaround is to first do the computation in ML using
hardware floating-point numbers. If the result is finite (i.e. not NaN
or Infinity) and the proposition holds for the resulting numbers, I call
the approximation tactic. It's not pretty and certainly not "complete",
but it handles all of my
use cases pretty well so far.
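The pattern is roughly the following (sketched in Python rather than Isabelle/ML; the function names and the concrete proposition are illustrative, not the actual tactic code):

```python
import math

def verified_check(x: float, y: float) -> bool:
    # Stand-in for the slow, fully verified step (invoking the
    # approximation tactic); here just the proposition itself.
    return x < y

def precheck_then_verify(x: float, y: float) -> bool:
    """Cheap hardware-float pre-check before the expensive verified step.

    Incomplete by design: if the float computation overflows to
    Infinity/NaN, or the proposition already fails on the float
    approximation, we give up without attempting the proof.
    """
    fast = x - y  # cheap hardware floating-point computation
    if math.isnan(fast) or math.isinf(fast):
        return False  # result not finite: abort, don't call the tactic
    if not fast < 0.0:
        return False  # proposition fails on the approximation: abort
    return verified_check(x, y)
```

The point of the pre-check is only to avoid feeding the tactic inputs on which it would diverge or fail; it never replaces the verified step.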
The only proper solution I can think of would be to add some kind of
precision for the exponent as well, and if an exponent exceeds that
number, the computation is aborted with an error.
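That exponent bound could be sketched like this (again in Python; the bound and the error type are placeholders, and the real check would live inside the tactic's interval arithmetic):

```python
import math

MAX_EXPONENT = 1024  # hypothetical exponent precision/bound

def checked(x: float) -> float:
    """Return x unchanged, or raise if its binary exponent exceeds the bound."""
    _, exp = math.frexp(x)  # x = mantissa * 2**exp
    if abs(exp) > MAX_EXPONENT:
        raise OverflowError(f"exponent {exp} exceeds bound {MAX_EXPONENT}")
    return x
```

Aborting with an error keeps the tactic honest: it fails fast on pathological inputs instead of grinding through arithmetic on astronomically large exponents.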
> As a side note on the tactic in general, if you care about performance: In my experience,
> you get good results (here about 5s) by setting the precision to about half the precision of the native integer type (i.e., 15 for x86 and 30 for x86_64).
That's not an issue here since I chose 10 anyway. That's usually enough
for my needs.