Logical necessity of inconsistency

The logical necessity of inconsistency is the controversial thesis in computer science and mathematical logic that there are logically necessary inconsistencies. Historically, the thesis grew out of the ineradicable inconsistencies among the documentation, use cases, and code of large software systems. These inconsistencies led to the development of Direct Logic, a strongly paraconsistent, unstratified, natural deduction logistic system (which omits the rule of ∨-introduction and restricts Reductio ad absurdum). Direct Logic makes use of reflection in order to reason about the relationships among documentation, use cases, and code. Partly in order to avoid the usual paradoxes of reflection (e.g., the Liar), Direct Logic restricted reflection to Admissible propositions, where a proposition Ψ is Admissible in a theory T exactly when ¬Ψ implies in T that ├T ¬Ψ. However, the argument around the Gödelian proposition is Admissible, and the result is a logically necessary inconsistency concerning the provability of the Gödelian proposition.

Strong paraconsistency
Paraconsistency was invented as a principle so that reasoning remains possible using inconsistent theories. Formally, the principle is that not every proposition should follow from the mere existence of a single inconsistency. For example, in the empty theory (which has no axioms), the principle of simple paraconsistency requires that there be a proposition Ψ such that
:P, ¬P ⊬ Ψ

However, for the purposes of reasoning about large software systems, a stronger principle is needed. The basic idea of Strong Paraconsistency is that no nontrivial inferences should be possible from the mere fact of an inconsistency. For example, the principle of strong paraconsistency requires:
:P, ¬P, Q ⊬ ¬Q because the inconsistency between P and ¬P is not relevant to Q.

Of course, the following trivial inference is possible even with strong paraconsistency:
:P, ¬P ├ (Q ├ ¬P) and so forth.
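As a toy illustration (not Direct Logic itself), even a minimal consequence relation with only the identity and left-weakening rules already exhibits both principles: a conclusion follows only if it literally appears among the premises.

```python
def derivable(premises, conclusion):
    """Toy consequence relation with only identity and left weakening:
    a conclusion is derivable iff it appears verbatim among the premises.
    Far weaker than Direct Logic, but enough to make both principles visible."""
    return conclusion in premises

assert derivable(["P", "¬P"], "¬P")           # trivial inferences remain possible
assert not derivable(["P", "¬P"], "Q")        # simple paraconsistency: no explosion
assert not derivable(["P", "¬P", "Q"], "¬Q")  # strong paraconsistency: the P/¬P clash is irrelevant to Q
```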

Reification reflection in Direct Logic
Every proposition Ψ has a reification, given by ⌈Ψ⌉∈Sentences⊆XML. Similarly, every s∈Sentences has an anti-reification, which is the proposition given by ⌊s⌋.

Reification is needed for large software systems so that documentation, use cases, and code can mutually speak about what has been said and its meaning, to enable the following functionality:
*The execution of code can be dynamically checked against its documentation. Also Web Services can be dynamically searched for and invoked on the basis of their documentation.
*Use cases can be inferred by specialization of documentation and from code by automatic test generators and by model checking.
*Code can be generated by inference from documentation and by generalization from use cases.
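The reification round trip ⌊⌈Ψ⌉⌋ can be sketched in Python. The `Prop` class and the `<sentence>` XML element are illustrative assumptions, not part of Direct Logic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    """Stand-in for a proposition Ψ (illustrative assumption)."""
    text: str

def reify(p: Prop) -> str:
    """⌈Ψ⌉: map a proposition to a sentence, here a small XML string."""
    return f"<sentence>{p.text}</sentence>"

def antireify(s: str) -> Prop:
    """⌊s⌋: recover the proposition that a sentence denotes."""
    assert s.startswith("<sentence>") and s.endswith("</sentence>")
    return Prop(s[len("<sentence>"):-len("</sentence>")])

psi = Prop("seawater is salty")
assert antireify(reify(psi)) == psi  # the round trip ⌊⌈Ψ⌉⌋ gives back Ψ
```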

The big issue for reification in Direct Logic is the question “When is Reification Reflection applicable?”, where Reification Reflection for Ψ in a theory T is defined as follows:
:Ψ ↔T ⌊⌈Ψ⌉⌋
::where ↔T is bi-implication in the theory T as defined in Direct Logic.

The naive use of reflection results in paradoxes, e.g., the Liar. These paradoxes are barred by the following criterion (called Admissibility) for allowing the use of reflection: Ψ is Admissible for T if and only if
:(¬Ψ) →T ├T ¬Ψ

The motivation for Admissibility builds on the denotational semantics of the Actor model of computation. A subsequent development, the TimedDiagrams model, includes the Representation Theorem, which states:
The denotation DenoteS of an Actor system S represents all the possible behaviors of S as
:DenoteS = ⊔i∈ω ProgressionSi(⊥S)
where ProgressionS is an approximation function that takes a set of approximate behaviors to their next stage and ⊥S is the initial behavior of S.
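The least-upper-bound construction ⊔i∈ω ProgressionSi(⊥S) can be sketched by iterating a monotone function to its fixed point. Here reachable states of a small transition graph stand in for behaviors; the graph and all names are illustrative assumptions, not the TimedDiagrams model:

```python
def lub_of_progressions(progression, bottom):
    """Compute the limit of bottom, progression(bottom), progression²(bottom), …
    for a monotone `progression` on finite sets; iteration stops at a fixed point."""
    approx = bottom
    while True:
        nxt = progression(approx)
        if nxt == approx:
            return approx
        approx = nxt

# Toy system S: "behaviors" are the states reachable in a transition graph.
edges = {0: {1}, 1: {2}, 2: {2}}
progression = lambda s: s | {v for u in s for v in edges.get(u, set())}

assert lub_of_progressions(progression, {0}) == {0, 1, 2}  # DenoteS for ⊥S = {0}
```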

In this context, Ψ is Admissible for some system S means that ¬Ψ implies that there is a counter example to Ψ in DenoteS so that in the denotational theory induced by S:
:(¬Ψ) → ├S ¬Ψ

Logical fixed point theorem
The logical fixed point theorem, also known as the diagonal lemma, is a logical variant of the Fixed point combinator.

Theorem
:Let f:Sentences→Sentences
:├T (⌊Fix(f)⌋ ↔T ⌊f(Fix(f))⌋)
::where Fix(f) ≡ Θ(Θ)
:::where Θ ≡ λ(g) f(λ(x) g(g)(x))
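Read operationally, Fix is the call-by-value fixed point (Z) combinator, and the definition above transcribes directly into Python (factorial here is only a stand-in for f):

```python
def Fix(f):
    """Fix(f) ≡ Θ(Θ) where Θ ≡ λ(g) f(λ(x) g(g)(x)) -- the Z combinator,
    eta-expanded so that evaluation terminates under call-by-value."""
    Theta = lambda g: f(lambda x: g(g)(x))
    return Theta(Theta)

# Stand-in for f: a recursion template whose fixed point is factorial.
fact = Fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```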

Theorem: Theories in Direct Logic are incomplete
Theories in Direct Logic are incomplete, i.e., for each theory T there is a proposition which can be neither proved nor disproved.

Proof method:
The key to the proof is the definition of a Gödelian paradoxical proposition ParadoxT where

:ParadoxT ≡ ⌊Fix(Diagonal)⌋ (which exists because Diagonal always converges)
::where Diagonal ≡ λ(s) ⌈¬├T ⌊s⌋⌉
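The diagonalization can be mimicked on strings: a template applied to its own quotation evaluates to Diagonal applied to the resulting sentence itself. The names `sub`, `diagonal`, `NotProvableInT`, and the hole marker `•` are illustrative assumptions:

```python
def sub(template: str, s: str) -> str:
    """Replace each hole '•' in template with the quotation (repr) of s."""
    return template.replace("•", repr(s))

def diagonal(s: str) -> str:
    """Diagonal ≡ λ(s) ⌈¬├T ⌊s⌋⌉, rendered as a quoted 'unprovability' wrapper."""
    return f"NotProvableInT({s!r})"

d = "diagonal(sub(•, •))"  # template that diagonalizes its own quotation
paradox = sub(d, d)        # analogue of Fix(Diagonal): the Gödelian sentence

# The sentence evaluates to Diagonal applied to the sentence itself.
assert eval(paradox) == diagonal(paradox)
```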

It is sufficient to prove the following:
#├T ¬├T ParadoxT
#├T ¬├T ¬ParadoxT

The proof makes use of the Logical Fixed Point Theorem to prove:
:''Lemma'' ├T (ParadoxT ↔T ¬├T ParadoxT)

Theorem: Theories in Direct Logic are inconsistent
Theories in Direct Logic are inconsistent, i.e., for each theory T there is a proposition which can both be proved and disproved.

Proof method:
It is sufficient to show that T proves both ├T ParadoxT and its negation, i.e.,
#├T ├T ParadoxT
#├T ¬├T ParadoxT

Consequently T is inconsistent concerning ├T ParadoxT.

The proof follows from the Incompleteness Theorem and the lemma used in its proof:
:*From the 2nd clause of the Incompleteness Theorem and the lemma used in its proof, it follows that ├T ParadoxT and therefore ├T ├T ParadoxT by the adequacy of T.
:*On the other hand, ├T ¬├T ParadoxT is just the 2nd clause of the Incompleteness Theorem.

Consequences of logically necessary inconsistency
But all is not lost because the following can be said about this logically necessary inconsistency:
*Because T is strongly paraconsistent, that T is inconsistent about ├T ParadoxT (by itself) should not affect other reasoning. Also the subject matter of ├T ParadoxT is not of general interest in software engineering and should not affect reasoning about current large software systems. So do software engineers need to care that T is inconsistent about ├T ParadoxT as opposed to all the other inconsistencies of T which they care about more?
*The logically necessary inconsistency concerning ├T ParadoxT is a nice illustration of how inconsistencies often arise in large software systems: “there can be good arguments (proofs) on both sides for contradictory conclusions”.

A big advantage of strongly paraconsistent logic is that it makes fewer mistakes than classical logic when dealing with inconsistent theories. Since software engineers have to deal with theories chock full of inconsistencies, strong paraconsistency should be attractive. However, to make it relevant we need to provide them with tools that are cost effective.

What about truth?
At first, TRUTH may seem like a desirable property for propositions in theories for large software systems. However, because a paraconsistent reflective theory T is necessarily inconsistent about ├T ParadoxT, it is impossible to consistently assign truth values to propositions of T. In particular it is impossible to consistently assign a truth value to the proposition ├T ParadoxT. If the proposition is assigned the value TRUE, then (by the rules for truth values) it must also be assigned FALSE and vice versa. It is not obvious what (if anything) is wrong or how to fix it.

Of course this is contrary to the traditional view of Tarski. E.g.,
:"I believe everybody agrees that one of the reasons which may compel us to reject an empirical theory is the proof of its inconsistency: a theory becomes untenable if we succeeded in deriving from it two contradictory sentences ... It seems to me that the real reason of our attitude is...: We know (if only intuitively) that an inconsistent theory must contain false sentences."

On the other hand, Frege suggested that, in a logically perfect language, the word ‘true’ would not appear! According to McGee, Frege argued that "when we say that it is true that seawater is salty, we don’t add anything to what we say when we say simply that seawater is salty, so the notion of truth, in spite of being the central notion of logic, is a singularly ineffectual notion. It is surprising that we would have occasion to use such an impotent notion, nevermind that we would regard it as valuable and important."

Relationship to Gödel
Why did Gödel and the logicians who followed him not go in this direction? Solomon Feferman remarked on "the shadow of Hilbert that loomed over Gödel from the beginning to the end of his career." Also Feferman conjectured that "Gödel simply found it galling all through his life that he never received the recognition from Hilbert that he deserved." Furthermore, Feferman maintained that "the challenge remained well into his last decade for Gödel to demonstrate decisively, if possible, why it is necessary to go beyond Hilbert’s finitism in order to prosecute the constructive consistency program." Indeed Gödel saw his task as being "to find a consistency proof for arithmetic based on constructively evident though abstract principles".

Also Gödel was a committed Platonist, which has an interesting bearing on the issue of the status of reflection. Gödel invented arithmetization to encode abstract mathematical statements as integers. Direct Logic provides a similar way to easily formalize and strongly paraconsistently prove Gödel’s argument (and even a subsequent extension of it). But it is not clear that Direct Logic is fully compatible with Gödel’s Platonism.

With an argument just a step away from inconsistency, Gödel (with his abundance of caution) was not prepared to go in that direction.