Re: [isabelle] Target language bindings for rat



My own, perhaps somewhat naïve, thoughts on this: What is the advantage
of using target-language rats? Are they really faster than the code that
Isabelle produces?

Intuitively, I would think that the basic operations on rational numbers
translate to operations on integers very clearly and obviously, and that
there is not much room for creativity – except for the question of whether
and when to reduce the numerator and denominator.
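
For concreteness, a minimal sketch in Standard ML of what I mean (this is
deliberately naïve and not the interface of src/Tools/rat.ML or of any
existing library; it assumes Poly/ML, where int is arbitrary-precision, and
all the names are made up):

  (* rationals as plain pairs of integers; invariant: denominator > 0 *)
  type rat = int * int

  (* the arithmetic operations are just integer arithmetic on the pairs *)
  fun add ((a, b), (c, d)) : rat = (a * d + c * b, b * d)
  fun mul ((a, b), (c, d)) : rat = (a * c, b * d)
  fun le  ((a, b), (c, d)) = a * d <= c * b   (* sound since b, d > 0 *)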

An FFI-based implementation using the GNU Multiple Precision Arithmetic
Library (GMP) might be faster because it reduces overhead, but even
Haskell (which uses GMP for its integers) does not do that and instead
relies on ‘pure’ rational numbers implemented as a datatype.

On the question of normalisation, let me remark that GMP always assumes
that rationals are in canonical form and also brings all results into
canonical form, and the GMP people generally seem to know their stuff, so
this is probably a good idea. (I think they also offer ‘raw’ functions
without this normalisation, though.)
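
To illustrate what canonical form means here (numerator and denominator
coprime, denominator positive), the normalisation step is just an integer
gcd; continuing the sketch from above, with the same caveats:

  (* reduce a pair to canonical form: gcd 1, denominator positive *)
  fun gcd (a, 0) = abs a
    | gcd (a, b) = gcd (b, a mod b)

  fun normalize (a, b) =
    let
      val g = gcd (a, b)
      val g = if b < 0 then ~g else g   (* move the sign to the numerator *)
    in (a div g, b div g) end

  (* e.g. (1/3 + 1/6) + 1/2 evaluates to (36, 36) without any intermediate
     reduction, and normalize (36, 36) = (1, 1) *)
  val r = normalize (add (add ((1, 3), (1, 6)), (1, 2)))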


Cheers,

Manuel


On 18/12/15 03:24, W. Douglas Maurer wrote:
> 
> The main problem with standard treatments of rational numbers, as I see it, is: how often do you reduce to lowest terms? For example: (1/3+1/6)+1/2 -- do you reduce 1/3+1/6 to 1/2 immediately, and then add 1/2 to get 2/2, reducing to 1? Or do you leave 1/3+1/6 as 9/18 and then add 1/2 to get 36/36, reducing only at the end? The problem is that reducing to lowest terms is slow, in the worst case. You can ask the user when to reduce, potentially saving time, but would users necessarily know the best times to reduce? Is anyone aware of a good general solution to this problem? -WDMaurer
> 
> Sent from my iPhone
> 
>> On Dec 17, 2015, at 6:38 AM, cl-isabelle-users-request at lists.cam.ac.uk wrote:
>>
>>
>>
>> Today's Topics:
>>
>>   1.  Target language bindings for rat (Andreas Lochbihler)
>>   2. Re:  Target language bindings for rat (Wenda Li)
>>   3. Re:  Target language bindings for rat (Michael Norrish)
>>   4.  Multiple (structure) arguments (Andreas Lochbihler)
>>   5.  Some remarks on natural deduction and axiomatic set theory
>>      (Ken Kubota)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Wed, 16 Dec 2015 17:48:23 +0100
>> From: Andreas Lochbihler <andreas.lochbihler at inf.ethz.ch>
>> Subject: [isabelle] Target language bindings for rat
>> To: isabelle-users <isabelle-users at cl.cam.ac.uk>
>> Message-ID: <567195D7.40502 at inf.ethz.ch>
>> Content-Type: text/plain; charset="utf-8"; format=flowed
>>
>> Dear all,
>>
>> I am looking for bindings of rational numbers to the target languages of the code 
>> generator. Has anyone done something in this direction? E.g., something analogous to 
>> Code_Numeral.integer and Code_Target_Int.thy?
>>
>> Haskell and OCaml support arbitrary-precision rational numbers in their libraries 
>> (Rational and Num.num) and there is a library for Scala (https://github.com/non/spire), 
>> but I have not found anything for SML. Does anyone know of such a library for SML?
>>
>> Best,
>> Andreas
>>
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Wed, 16 Dec 2015 17:39:04 +0000
>> From: Wenda Li <wl302 at cam.ac.uk>
>> Subject: Re: [isabelle] Target language bindings for rat
>> To: Andreas Lochbihler <andreas.lochbihler at inf.ethz.ch>
>> Cc: isabelle-users <isabelle-users at cl.cam.ac.uk>
>> Message-ID: <dbb2334c756931747bbd17141a99cf59 at cam.ac.uk>
>> Content-Type: text/plain; charset=US-ASCII; format=flowed
>>
>> Hi Andreas,
>>
>> As far as I know, src/Tools/rat.ML is an implementation of rational 
>> numbers in Isabelle/ML, and MetiTarski 
>> (https://bitbucket.org/lcpaulson/metitarski/src) uses another 
>> implementation that was somehow inherited from John Harrison.
>>
>> I have considered this question before, but given that there is no 
>> standard implementation of rational numbers in the standard PolyML 
>> library, I thought it would be risky to do such a binding...
>>
>> Hope this helps,
>> Wenda
>>
>>> On 2015-12-16 16:48, Andreas Lochbihler wrote:
>>> Dear all,
>>>
>>> I am looking for bindings of rational numbers to the target languages
>>> of the code generator. Has anyone done something in this direction?
>>> E.g., something analogous to Code_Numeral.integer and
>>> Code_Target_Int.thy?
>>>
>>> Haskell and OCaml support arbitrary-precision rational numbers in
>>> their libraries (Rational and Num.num) and there is a library for
>>> Scala (https://github.com/non/spire), but I have not found anything
>>> for SML. Does anyone know of such a library for SML?
>>>
>>> Best,
>>> Andreas
>>
>> -- 
>> Wenda Li
>> PhD Candidate
>> Computer Laboratory
>> University of Cambridge
>>
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Wed, 16 Dec 2015 21:04:47 +0000
>> From: Michael Norrish <Michael.Norrish at nicta.com.au>
>> Subject: Re: [isabelle] Target language bindings for rat
>> To: Isabelle Users Mailing List <cl-isabelle-users at lists.cam.ac.uk>
>> Message-ID: <79622222-1250-4610-9A23-FD45D200FBA2 at nicta.com.au>
>> Content-Type: text/plain; charset="us-ascii"
>>
>> There is an implementation used in HOL4 at
>>
>> https://github.com/HOL-Theorem-Prover/HOL/blob/master/src/portableML/Arbrat.{sig,sml}
>>
>>
>>
>> Michael
>>
>>> On 17 Dec 2015, at 03:58, Andreas Lochbihler <andreas.lochbihler at inf.ethz.ch> wrote:
>>>
>>> Dear all,
>>>
>>> I am looking for bindings of rational numbers to the target languages of the code generator. Has anyone done something in this direction? E.g., something analogous to Code_Numeral.integer and Code_Target_Int.thy?
>>>
>>> Haskell and OCaml support arbitrary-precision rational numbers in their libraries (Rational and Num.num) and there is a library for Scala (https://github.com/non/spire), but I have not found anything for SML. Does anyone know of such a library for SML?
>>>
>>> Best,
>>> Andreas
>>
>>
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Thu, 17 Dec 2015 12:36:09 +0100
>> From: Andreas Lochbihler <andreas.lochbihler at inf.ethz.ch>
>> Subject: [isabelle] Multiple (structure) arguments
>> To: isabelle-users <isabelle-users at cl.cam.ac.uk>
>> Message-ID: <56729E29.7000006 at inf.ethz.ch>
>> Content-Type: text/plain; charset="utf-8"; format=flowed
>>
>> Dear syntax experts,
>>
>> In HOL-Algebra, there are multiple cases where two structures are fixed in the same 
>> context and declared as (structure) -- usually in the homomorphism locales. However, it 
>> seems as if the second (structure) declaration cannot be used, because there are all the 
>> \<^bsub>...\<^esub> annotations for the second structure.  This is in line with the 
>> implementation manual (Sect. 7.4.3), which states that "\<index>" always refers to the 
>> first declared structure. This makes sense as it would not be clear in general which 
>> variable to pick for the reference (although type checking might disambiguate things in 
>> many practical cases).
>>
>> IMO it would make sense to support constants that depend on several instances (for example 
>> an operation on homomorphisms) and that thus should reference multiple structure 
>> arguments. Unfortunately, I cannot even declare the mixfix syntax like in
>>
>>   "FOO\<index>\<index>"
>>
>> because the parser complains about "Duplicate index arguments (\?)". Is there a way to 
>> reference several implicit structures for a constant? If not, how hard would it be to 
>> extend the index referencing to multiple references?
>>
>> Best,
>> Andreas
>>
>>
>>
>> ------------------------------
>>
>> Message: 5
>> Date: Wed, 16 Dec 2015 19:29:39 +0100
>> From: Ken Kubota <mail at kenkubota.de>
>> Subject: [isabelle] Some remarks on natural deduction and axiomatic
>>    set theory
>> To: cl-isabelle-users at lists.cam.ac.uk
>> Cc: "Prof. Lawrence C. Paulson" <lp15 at cam.ac.uk>,    "Prof. Peter B.
>>    Andrews" <andrews at cmu.edu>,    Gottfried Barrow <igbi at gmx.com>,    "James R.
>>    Meyer" <james0508 at jamesrmeyer.com>
>> Message-ID: <B43EAE87-795B-4A55-9A4D-0A694050CBB4 at kenkubota.de>
>> Content-Type: text/plain; charset=utf-8
>>
>> Dear Gottfried Barrow and List Members,
>>
>> Thank you for your comment at
>>    https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00023.html
>>
>> concerning my note available at
>>    https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00018.html
>>
>>
>> Let me respond to the last three paragraphs of your comment quoted below which 
>> summarize the most important questions.
>>
>>
>> 1. There is no relation between a simple minimum core logic and the 
>> expressiveness of the formal language. A modular software design guarantees a 
>> clear separation of functionality such that the logic is concentrated in the 
>> logical core and no enhancements of other software layers (i.e., user 
>> interface, proof tactics) will have an effect on the correctness of the logic. 
>> This concept is, according to Andrei Popescu, shared by all implementations in 
>> discussion here: "A small logic kernel is an implementation-independent virtue 
>> of Isabelle/HOL as well as of all the Gordon-HOL systems."
>>
>> "It is the logical reliability due to reduction that builds the core of the 
>> LCF-approach to theorem proving (or LCF-style full expansiveness [cf. Gordon, 
>> 2000, p. 178]): 'All proofs are ultimately performed in terms of a small set of 
>> primitive inferences, so provided this small logical kernel is correct the 
>> results should be reliable.' [Harrison, 2009, p. 60]" [Kubota, 2015a, pp. 12 f.]
>>
>> The idea behind the LCF-approach appealing also to those who do not share 
>> metatheoretical concepts such as the semantic approach (model theory) is to 
>> reduce the set of rules and elements, such that the logical core is minimized 
>> without loss of expressiveness. I called it the "The Principle of Reductionism" 
>> [Kubota, 2015a, p. 11].
>>
>> In Peter B. Andrews' logic Q0 this principle is - by the use of syntactical 
>> means only - carried out consistently, such that the basic means of the 
>> language are reduced without loss of expressiveness to
>> - a single rule of inference,
>> - a single variable binder,
>> - only two constants and
>> - only two types,
>> obtaining "definability of all of the propositional connectives, as well as all 
>> of the quantifiers (universal, existential and uniqueness quantifier)" and 
>> derivability of all other rules including the rule of modus ponens as mentioned 
>> earlier at
>>    https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00147.html
>>
>> In other words, with this simple minimum core logic of Q0 all of formal logic 
>> and large parts of mathematics, and with type variables (polymorphic type 
>> theory) and the binding of type variables with lambda (dependent type theory), 
>> presumably all of mathematics can be construed, expressed and proved purely 
>> syntactically. Due to its simplicity, the logical kernel becomes extremely 
>> small and logically robust, i.e., mathematically safe.
>>
>> In the R0 implementation, the propositional connectives therefore are not part 
>> of the logical core, but their definitions are "outsourced" into a file and, 
>> for example, the rule of modus ponens into a separate proof template file. In 
>> fact, the expressiveness of R0 as a logistic system is stronger than that of 
>> Isabelle/HOL. Isabelle/HOL allows for type variables and therefore has the 
>> expressiveness of polymorphic type theory, but only R0 allows, in addition to 
>> that, the binding of type variables with lambda as abstraction operator and 
>> therefore enables the construction of types required for dependent type theory.
>>
>>
>> 2. Any axiom schema requires a meta-level for instantiation. Since my 
>> philosophical background rejects any sort of metatheories (I prefer to speak of 
>> arithmetization of mathematics instead of metamathematics, refuting any 
>> foundational or legitimating role of metatheories), an axiom schema - like any 
>> other meta-construction - always reveals a deficit. Whereas Andrews' logic Q0 
>> requires an "Axiom Schema 3" [Andrews, 2002, p. 213] (with "syntactical 
>> variables ranging over type symbols" [Andrews, 2002, p. 210]) for the Axiom of 
>> Extensionality, R0 uses regular type variables for this axiom [cf. Kubota, 
>> 2015a, pp. 39, 351], which were not available in Q0 being a simple type theory, 
>> i.e., without type variables.
>>
>> Moreover, axiomatic set theory as a whole is subject to critique: "Axiomatic 
>> set theory uses axioms to justify the existence of certain sets. Unless the 
>> proof of existence from these axioms is provided, any construction cannot be 
>> introduced into the domain of discourse. This approach, however, is not 
>> systematic and subject to arbitrary conditions. For example, the most common 
>> formulation, the Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC), 
>> comprises axioms for the empty set, union, one-element sets, power sets etc., 
>> trying to cover the standard sets used in mathematics. But there always remain 
>> legitimate sets not contained by these axioms; for example it is well known 
>> that large cardinal numbers and certain universes cannot be treated within ZFC 
>> (without assuming additional axioms), although from a purely logical point of 
>> view there is no difference between the mathematical objects covered by ZFC and 
>> those not covered.
>>
>> This problem has its origin in the approach itself, as axiomatic set theory 
>> makes use of non-logical axioms to establish sets, thus trying to avoid 
>> paradoxes via content restrictions instead of the proper specification of the 
>> formal language. Opting this method, axiomatic set theory including ZFC is 
>> bound to remain a preliminary and auxiliary approach, but not the foundation of 
>> mathematics." [Kubota, 2015a, p. 18].
>>
>>
>> 3. Russell's paradox ("the set of all sets that are not members of themselves", 
>> or originally: "Let w be the predicate: to be a predicate that cannot be 
>> predicated of itself. [...] Likewise there is no class (as a totality) of those 
>> classes which, each taken as a totality, do not belong to themselves." 
>> [Russell, 1902, p. 125]) with the two constitutive properties of antinomies, 
>> self-reference and negativity (negation), ruled out Cantor's naive set theory 
>> as a foundation of mathematics. If Goedel's construction of a proposition 
>> involving self-referencing negativity ("I am not provable", or originally: "We 
>> therefore have before us a proposition that says about itself that it is not 
>> provable" [G?del, 1931, p. 598]) has to be considered as an antinomy, further 
>> research will have to find out whether it rules out natural deduction 
>> (including sequent calculus) for the foundation of mathematics as a whole or 
>> whether a clear distinction of the object language and the meta-language can be 
>> introduced into natural deduction.
>>
>> The strict distinction of the object language and the meta-language was so 
>> important to Church that he emphasized it a second time in his article: "We 
>> must, of course, distinguish between formal theorems, or theorems of the 
>> system, and syntactical theorems, or theorems about the system, this and 
>> related distinctions being a necessary part of the process of using a known 
>> language (English) to set up another (more exact) language." [Church, 1940, p. 
>> 61]
>>
>> In natural deduction, as for example in Isabelle, metatheorems (in terms of 
>> Hilbert-style systems) become part of the formal language (the object language) 
>> itself. So from the point of view of axiomatic (Hilbert-style) deductive 
>> systems, the expressiveness in natural deduction is shifted towards the 
>> meta-level, possibly at the cost of expressiveness at the object level. This 
>> caused me to introduce my own terminology with the notions of "object logic" 
>> and "meta-logic" in my second publication on Goedel [cf. Kubota, 2015] until 
>> realizing that the definitional line between "object logic" and "meta-logic" is 
>> identical with that between axiomatic (Hilbert-style) deductive systems and 
>> natural deduction, having found the number of allowed occurrences of the 
>> deduction symbol (turnstile) in a theorem as a formal criterion. The 
>> translation mechanism from a theorem to the existence of its proof (or vice 
>> versa) in Goedel's First Incompleteness Theorem is in Paulson's presentation 
>> his theorem 'proved_iff_proved_PfP' - called (P) in my article -, in Andrews' 
>> presentation the implicit rule used for step 7101.4. Whereas in Andrews' system 
>> this implicit rule immediately results in inconsistency [cf. Kubota, 2015, pp. 
>> 10 f.], I am not sure whether this applies to natural deduction, too. Because 
>> of a limited expressiveness at the object level, the antinomy might be 
>> construed without causing an openly visible inconsistency in the formal 
>> language, since "the two occurrences of the deduction symbol [...] remain an 
>> obstacle for obtaining a paradox [i.e., an inconsistency] in the metalogic 
>> [i.e., in natural deduction]" [Kubota, 2015, p. 14]. Then, "Gödel's first 
>> incompleteness theorem: If consistent, our theory is incomplete." (Paulson) is 
>> rendered from a hypothetical judgement to a trivial tautology in the form of a 
>> conclusio ex falso, since the theory under consideration is actually 
>> inconsistent. However, an antinomy (paradox) should not be expressible at all 
>> (in formal logic and mathematics).
>>
>> If natural deduction is ruled out for the foundation of mathematics, there 
>> would be no other "fix" for Isabelle than switching to an axiomatic 
>> (Hilbert-style) deductive system. Then, if one does not want to downgrade from 
>> higher-order logic (which would artificially limit the expressiveness of the 
>> language) and prefers to remain in the realm of Russell's invention, namely 
>> type theory [cf. Russell, 1908], basically three options would be left:
>> a) Church's Hilbert-style system of 1940 [cf. Church, 1940] (simple type 
>> theory, i.e., without type variables)
>> b) Andrews' logic Q0 [cf. Andrews, 2002, pp. 210-215], an improved formulation 
>> with identity (equality) as the main notion (also simple type theory, i.e., 
>> without type variables), first published in 1963 as a simplification of 
>> Henkin's variant of Church's formulation [cf. Andrews, 1963, pp. 345 f., 350] 
>> and described at
>>    https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00147.html
>> c) my own logic R0 [cf. Kubota, 2015a] (polymorphic and dependent type theory, 
>> i.e., with type variables and types depending on arguments due to type 
>> variables bound with lambda), a further development of Q0, described at
>>    https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00170.html
>>
>> In his Ph.D. thesis, Andrews developed a transfinite type theory with type 
>> variables Q [cf. Andrews, 1965, pp. 3-8], which is already some sort of 
>> polymorphic type theory. But instead of consistently reducing the variable 
>> binders to a single one as in Q0 (with the abstraction operator and single 
>> variable binder lambda, obtaining definability of the universal, existential 
>> and uniqueness quantifier) at the type level of Q also, which would have built 
>> the foundation of a dependent type theory, the universal quantifier becomes a 
>> true variable binder (an improper symbol) in the case of types (for details 
>> [cf. Kubota, 2015a, p. 31]).
>>
>> Concerning Isabelle, I do not believe that "30 years" and "millions of dollars" 
>> are simply lost. Both Goedel's First Incompleteness Theorem and natural 
>> deduction have been publicly available for about 80 years now, in relation to 
>> which shorter periods and any economic quantities appear relatively small. I 
>> highly value Paulson's presentation of Goedel's First Incompleteness Theorem, 
>> which, unlike all other presentations I studied (Andrews and Rautenberg), does 
>> not use a shortcut for (P), but a proven theorem, leaving only the (rather 
>> methodological) implicit assumptions of
>> - natural deduction (or at least the lack of a clear distinction of the object 
>> language and the meta-language) and
>> - the semantic approach (model theory invented by Tarski),
>> assuming otherwise correctness of the syntactical inference in Isabelle. Maybe 
>> this fully formalized presentation of Goedel's First Incompleteness Theorem by 
>> Professor Paulson as the climax in the development and application of the 
>> Isabelle proof assistant software was even a necessary step for evaluating the 
>> implications (and potential deficits) of natural deduction.
>>
>>
>> Following the arguments given above, the path to the natural and ideal 
>> formulation of formal logic and mathematics is narrowed down to higher-order 
>> logic in the form of a Hilbert-style type theory, unless natural deduction is 
>> reformulated in a way that preserves the distinction of the object language and 
>> the meta-language, and the possibility of offering clarification here seems to 
>> me to be one of the most important tasks at present.
>>
>> In order to do so, I suggest the careful examination of Paulson's presentation 
>> of Goedel's First Incompleteness Theorem under consideration of the critique of 
>> James R. Meyer [cf. Meyer, 2015] and myself both printed [cf. Kubota, 2013; 
>> Kubota, 2015] and online at
>>    https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00018.html
>>
>> This seems even more important, as with the critique of natural deduction or 
>> certain modes of its implementation, all current major theorem provers, such as
>> - Lawrence C. Paulson's Isabelle (proof claimed by Lawrence C. Paulson),
>> - Coq by INRIA et al. (proof claimed by Russell O'Connor), and
>> - Mike Gordon's original HOL system and its variants, including
>> - John Harrison's HOL Light (proof claimed by John Harrison),
>> might be affected, as they all rely - either as the single form of inference or 
>> as one of several components - on natural deduction (considering sequent 
>> calculus as a subcase here, defining natural deduction as opposed to 
>> Hilbert-style systems).
>>
>> The only older theorem prover, in which Goedel's First Incompleteness Theorem 
>> was claimed to be proven,
>> - Robert S. Boyer's and J. Strother Moore's Nqthm (proof claimed by Natarajan 
>> Shankar),
>> obviously allows the instantiation of metatheorems, which might violate the 
>> object language ("Proved meta-theorems can be installed as simplifiers", 
>> http://www-formal.stanford.edu/clt/ARS/Entries/nqthm, December 13, 2015).
>>
>>
>> Kind regards,
>>
>> Ken Kubota
>>
>>
>> References
>>
>> Andrews, Peter B. (1963), "A reduction of the axioms for the theory of 
>> propositional types". In: Fundamenta Mathematicae 52, pp. 345-350.
>>
>> Andrews, Peter B. (1965), A Transfinite Type Theory with Type Variables. 
>> Amsterdam: North-Holland Publishing Company.
>>
>> Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type 
>> Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London: 
>> Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI: 10.1007/978-94-015-9934-4.
>>
>> Church, Alonzo (1940), "A Formulation of the Simple Theory of Types". In: 
>> Journal of Symbolic Logic 5, pp. 56-68.
>>
>> Gödel, Kurt (1931), "On formally undecidable propositions of Principia 
>> mathematica and related systems I". In: Heijenoort, Jean van, ed. (1967), From 
>> Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Cambridge, 
>> Massachusetts: Harvard University Press, pp. 596-616.
>>
>> Kubota, Ken (2013), On Some Doubts Concerning the Formal Correctness of Gödel's 
>> Incompleteness Theorem. Berlin: Owl of Minerva Press. ISBN 978-3-943334-04-3. 
>> DOI: 10.4444/100.101. See: http://dx.doi.org/10.4444/100.101
>>
>> Kubota, Ken (2015), Gödel Revisited. Some More Doubts Concerning the Formal 
>> Correctness of Gödel's Incompleteness Theorem. Berlin: Owl of Minerva Press. 
>> ISBN 978-3-943334-06-7. DOI: 10.4444/100.102. See: 
>> http://dx.doi.org/10.4444/100.102
>>
>> Kubota, Ken (2015a), On the Theory of Mathematical Forms (Draft of May 18, 
>> 2015). Unpublished manuscript. SHA-512: a0dfe205eb1a2cb29efaa579d68fa2e5 
>> 45af74d8cd6c270cf4c95ed1ba6f7944 fdcffaef2e761c8215945a9dcd535a50 
>> 011d8303fd59f2c8a4e6f64125867dc4. DOI: 10.4444/100.10. See: 
>> http://dx.doi.org/10.4444/100.10
>>
>> Meyer, James R. (2015), "A Fundamental Flaw in An Incompleteness Proof by 
>> Stanisław Świerczkowski" (v2 30 Apr 2015). SHA-512: 
>> 380de2a2e4dfa0441d7d3019025a1e49 87fd9282cb309fd7abd7be667879e554 
>> 696f5e265d911799ecf7416b9c0cf3c0 fbdb2a10bab58532f9a117ddee2b3866. Available 
>> online at http://www.jamesrmeyer.com/pdfs/ff_swierczkowski.pdf (November 28, 
>> 2015).
>>
>> Russell, Bertrand (1902), "Letter to Frege". In: Heijenoort, Jean van, ed. 
>> (1967), From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. 
>> Cambridge, Massachusetts: Harvard University Press, pp. 124 f.
>>
>> Russell, Bertrand (1908), "Mathematical Logic as based on the Theory of Types". 
>> In: American Journal of Mathematics 30, pp. 222-262.
>>
>>
>> On 09.12.2015 at 19:10, Gottfried Barrow <igbi at gmx.com> wrote:
>>
>> [...]
>>
>> Really, what would be of more value? A generalized logic that stands the test 
>> of time, such as Isabelle/Pure and Isabelle/HOL, or a very narrow logic, such 
>> as what you advise?
>>
>> How are people supposed to test the generalized logic, over many years, if the 
>> safer route is taken? And an inconsistent logic is not a crisis anyway. An 
>> inconsistent logic that can't be fixed with "don't do that" is a crisis. A loss 
>> of 30 years, that's a crisis.
>>
>> ZFC is an inconsistent logic that was fixed with "don't do that". It's called 
>> the Axiom Schema of Separation. It's ugly, because it demands a new axiom for 
>> every subset of a set. Or is "ugly" just a matter of perception?
>>
>> ____________________
>>
>> Ken Kubota
>> doi: 10.4444/100
>> http://dx.doi.org/10.4444/100
>>
>>
>>
>>
>> End of Cl-isabelle-users Digest, Vol 126, Issue 9
>> *************************************************
> 



