Logical truths and rational agnostic attitudes

Suppose that T is taken to be a logical truth, e.g. T has the form φ or not-φ (if you believe that excluded middle is not a logically valid formula-scheme, take another example). Now consider the following argument:

  1. Rational subjects at least implicitly know/are rationally committed to logical truths;
  2. For any φ, if one implicitly knows/is rationally committed to φ then it is not rational for one to suspend judgment about φ;
  3. Therefore, a rational subject does not suspend judgment about logical truths (1, 2);
  4. Subject S rationally suspends judgment about T;
  5. Therefore, T is not a logical truth (3, 4).

If sound, this argument could be used to prove that a number of things we have taken to be logical truths are not after all logical truths. If that is absurd, then one of 1, 2 or 4 must go. Let me say a little bit about each of these premises.

About premise 1: The idea here should be familiar to those who use Kripke models to formalize epistemic notions: you always know (perhaps implicitly) all propositions that are true in every world that is epistemically accessible to you; since logical truths are true in all of those worlds, all logical truths are known. But you don’t need to embrace all the idealizations of possible-worlds semantics in order to accept premise 1; there are other ways to motivate it. Here’s one: the way in which subjects cognize commits them to logical truths, because the epistemic goodness of their deductive inferences would seem to depend on those truths, and they are able to recognize this. E.g. when I deduce that ∃x(Fx & Gx) from my belief that (Fa & Ga) I manifest implicit knowledge of/commitment to (Fa & Ga) → ∃x(Fx & Gx), or to its necessitation. (Implicit knowledge/rational commitment need not be transparent or explicit to the knower herself.)
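Just to fix ideas, here is a toy version of the Kripke-style picture behind premise 1, sketched in Python; the worlds, accessibility relation and valuation are invented for illustration, and nothing hangs on them. Implicit knowledge is modelled as truth at every epistemically accessible world, so excluded middle, being true at every world whatsoever, comes out known at every world, whereas a contingent atom need not.

```python
# A toy Kripke model for implicit knowledge: K(phi) holds at a world w iff
# phi holds at every world epistemically accessible from w. The worlds,
# accessibility relation and valuation below are invented for illustration.

worlds = {"w1", "w2", "w3"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w3"}}
valuation = {"w1": {"p"}, "w2": set(), "w3": {"p"}}  # atoms true at each world

def holds(formula, w):
    """Evaluate a formula given as nested tuples, e.g. ('or', 'p', ('not', 'p'))."""
    if isinstance(formula, str):          # atomic sentence
        return formula in valuation[w]
    op = formula[0]
    if op == "not":
        return not holds(formula[1], w)
    if op == "or":
        return holds(formula[1], w) or holds(formula[2], w)
    if op == "K":                         # implicit knowledge
        return all(holds(formula[1], v) for v in access[w])
    raise ValueError(f"unknown operator: {op}")

excluded_middle = ("or", "p", ("not", "p"))

# A logical truth is true at every world, hence at every accessible world,
# hence known everywhere: this is premise 1 in the toy setting.
assert all(holds(excluded_middle, w) for w in worlds)
assert all(holds(("K", excluded_middle), w) for w in worlds)

# Contingent truths need not be known: p is true at w1, but false at the
# accessible world w2, so K(p) fails at w1.
assert holds("p", "w1") and not holds(("K", "p"), "w1")
```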

About premise 2: That I implicitly know/am rationally committed to φ in a given cognitive state s means, among other things, that I am prepared to transition from s to a new cognitive state s’ in which I explicitly know/rationally believe that φ. So the following norm applies to me while I am in s: I should not disbelieve φ or suspend judgment about φ; doing so would be incoherent of me, and since coherence is a requirement of rationality, it would also be irrational of me.

About premise 4: What gives support to it is an argument from appearances. In some cases it seems that it is not irrational for one to suspend judgment about a logical truth, say, because a reliable logician has told one that it is controversial whether it really is a logical truth. Or one might suspend judgment about what is (unbeknownst to one) indeed a logical truth (call it φ) as a result of one’s own theoretical reflections: one knows that system S validates φ and that system S’ does not, but one is not sure which of these systems correctly captures logical validity/necessity; each of S and S’ has its own advantages (e.g. one is stronger than the other, but the weaker one fits the claims of quantum mechanics better).

Initially I feel more inclined to reject 4, but that is because I have this thing with the ‘saving-appearances’ approach to philosophy. I won’t try to defend this option here, though; I just wanted to put the conflict between 1, 2 and 4 out there.

That one does not infer everything from a contradiction

C.I. Lewis wasn’t happy with the material conditional (\phi \rightarrow \psi), presumably because that conditional does not formalize the indicative conditional in quite the expected way. He then resorted to the strict conditional \Box (\phi \rightarrow \psi), thus avoiding many of the ‘counterintuitive’ results involving the material conditional, e.g.: \psi \models (\phi \rightarrow \psi)  but \psi \not\models \Box (\phi \rightarrow \psi), and \neg\phi \models (\phi \rightarrow \psi)  but \neg\phi \not\models \Box (\phi \rightarrow \psi), and also  \neg(\phi \rightarrow \psi) \models \phi  but \neg\Box(\phi \rightarrow \psi) \not\models \phi, among others.
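A concrete countermodel may help to see why the second members of these pairs fail (the model is, of course, just one arbitrary choice): let W = \{w, v\}, with both w and v accessible from w, and let \psi be true only at w and \phi true only at v. Then \psi and \neg\phi both hold at w, but \phi \rightarrow \psi fails at the accessible world v, so \Box(\phi \rightarrow \psi) fails at w. This single model thus witnesses \psi \not\models \Box(\phi \rightarrow \psi) and \neg\phi \not\models \Box(\phi \rightarrow \psi); and since \neg\Box(\phi \rightarrow \psi) holds at w while \phi does not, it also witnesses \neg\Box(\phi \rightarrow \psi) \not\models \phi.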

Still, some people will think that the strict conditional does not formalize the indicative conditional in quite the expected way either — and that is not just because there are the so-called ‘paradoxes of strict implication’, i.e.: \Box \psi \models \Box(\phi \rightarrow \psi) and \neg\Diamond \phi \models \Box(\phi \rightarrow \psi), but also because it is still the case that contradictions strictly imply anything, i.e.: \models \Box((\phi \wedge \neg\phi) \rightarrow \psi), for any \phi,\psi. If in addition to that one requires at least reflexivity of the accessibility relation, one then gets by modus ponens (\phi \wedge \neg\phi) \models \psi, or: anything follows from a contradiction. This is sometimes called the ‘principle of explosion’. So the strict conditional does not avoid all the drawbacks of the material conditional after all.
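For the record, the derivation just gestured at runs as follows (reflexivity of the accessibility relation validates the T-schema \Box\chi \rightarrow \chi):

  1. \models \Box((\phi \wedge \neg\phi) \rightarrow \psi) (strict-implication ‘paradox’);
  2. \models \Box((\phi \wedge \neg\phi) \rightarrow \psi) \rightarrow ((\phi \wedge \neg\phi) \rightarrow \psi) (T-schema);
  3. \models (\phi \wedge \neg\phi) \rightarrow \psi (1, 2, modus ponens);
  4. (\phi \wedge \neg\phi) \models \psi (3, modus ponens).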

But what is the problem with explosion? Priest (here) claims that there are ‘definite counterexamples’ to that principle. One of them is about Bohr’s theory of the atom. Priest says:

This was internally inconsistent. To determine the behaviour of the atom, Bohr assumed the standard Maxwell electromagnetic equations. But he also assumed that energy could come only in discrete packets (quanta). These two things are inconsistent (as Bohr knew); yet both were integrally required for the account to work. The account was therefore essentially inconsistent. Yet many of its observable predictions were spectacularly verified. It is clear though that not everything was taken to follow from the account. Bohr did not infer, for example, that electronic orbits are rectangles. (p. 75)

There appears to be a non-trivial assumption in the background of this argument, however, roughly: that if a certain logical entailment holds, it is rational for us to infer its conclusion on the basis of its premises. Given that Bohr did not infer any old thing from his inconsistent theory of the atom (and that it would not have been rational for him to do so anyway), the thought goes, explosion is not valid. One is reminded here of Harman’s points (here) against the view that there is a tight connection between logical consequence and correct reasoning, where the latter is understood as reasoned change in view. Harman and Priest agree that sometimes it is rational to be inconsistent. But while Harman infers from this that logic is not specially relevant to reasoning, Priest infers that classical logic is problematic.

Priest’s approach, however, can be seen to be problematic once we generalize it to other cases. We do not conclude, for example, that a logic in which \phi_1, ..., \phi_n \models (\phi_1 \wedge ... \wedge \phi_n) is problematic because a rational subject may fail to believe the conjunction of all his beliefs, or even of a localized subset of them. Priest would then need to tell us why we cannot do in the conjunction case what he did in the inconsistency case.

But this is not to endorse Harman’s conclusion either. As MacFarlane has emphasized (here), a rational subject may be under conflicting obligations, e.g. the obligation to avoid inconsistency versus the obligation to pay attention to the history of human error. Another alternative is to go probabilistic (as Field does here), and to substitute bridge principles connecting logical entailment to degrees of belief for bridge principles connecting logical entailment to black-and-white beliefs. Neither of these options leads to the conclusion that logic is not relevant to reasoning, and neither of them demands logical revision.
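To give the flavour of the probabilistic option: one bridge principle in the vicinity of the ones Field discusses says that valid arguments should not increase uncertainty, i.e. if \phi_1, ..., \phi_n \models \psi then one’s degrees of belief ought to satisfy P(\psi) \geq 1 - \sum_{i=1}^{n}(1 - P(\phi_i)). Notice that for an inconsistent premise set such as \{\phi, \neg\phi\} the premise uncertainties already sum to 1 for any coherent P, so the principle places no constraint whatsoever on P(\psi): explosion remains valid, but it issues no crazy normative demand on one’s degrees of belief.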

P.s.: It just came to my attention that Florian Steinberger has published a paper on the relationship between the principle of explosion and the normativity of logic here.

Successfully suspending judgment?

Is there an epistemic success term for suspension of judgment? And if so, what is it? Compare with belief: a belief’s being true (or having a true content) makes it epistemically successful, even though of course that does not imply that it is justified, or warranted, or knowledge. But we don’t want to say that being half true/half false is what makes suspension of judgment successful.

There are success criteria for belief other than that of being true. E.g. beliefs may be reliably formed, or supported by the evidence, or based on good reasons; perhaps similar things can be said about suspension of judgment.

What does it mean, though, for a state of suspension of judgment to be reliably formed? One idea is that it is so formed when the process that outputs suspension of judgment is such that, if it were to output beliefs instead, half of the time the subject would form a true belief, and half of the time a false one. Or: where p is the content of the agnostic state, the objective probability of p is approximately 0.5 conditional on the input of the cognitive process, or on the specification of the initial conditions of the cognitive process.
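Here is a toy way of making that proposal concrete, in Python; the process, the ‘evidence’ and the tolerance threshold are all made up for illustration. The idea is to estimate the truth-rate the process would have had if it had issued beliefs, and to count the resulting suspension as reliably formed when that rate is close to one half.

```python
import random

# A toy rendering of the 'reliably formed suspension' idea above. The process,
# the 'evidence' and the tolerance are invented for illustration: the point is
# just that suspension counts as reliably formed when, had the process issued
# beliefs instead, those beliefs would have been true about half the time.

random.seed(0)

def would_be_belief(evidence):
    """A hypothetical belief-forming disposition: believe 'heads' whenever the
    (here: completely uninformative) evidence leans that way."""
    return "heads" if evidence > 0.5 else "tails"

def counterfactual_truth_rate(process, trials=10_000):
    hits = 0
    for _ in range(trials):
        truth = random.choice(["heads", "tails"])  # the unknown fact of the matter
        evidence = random.random()                 # evidence unrelated to the truth
        if process(evidence) == truth:
            hits += 1
    return hits / trials

def reliably_formed_suspension(truth_rate, tolerance=0.05):
    # The criterion from the text: the counterfactual truth-rate is roughly 0.5.
    return abs(truth_rate - 0.5) <= tolerance

rate = counterfactual_truth_rate(would_be_belief)
print(rate, reliably_formed_suspension(rate))      # expect roughly 0.5, True
```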

Many things we suspend judgment about, however, may be necessary truths. Perhaps Goldbach’s conjecture is a necessary truth — but I suspend judgment about it because I am untrained in higher mathematics (or perhaps I am, but still). So the success criteria mentioned in the previous paragraph would imply that I am unsuccessful here — and not just because I’m failing to believe something that is true. They would motivate, that is, a certain skepticism about the epistemic status of agnostic attitudes. Perhaps I am unsuccessful in this situation, of course. But still, I do not appear to be malfunctioning at all here.

Similar things might be said about being supported by the evidence. The success criterion here would be: you successfully suspend judgment about p when your total evidence supports neither p nor not-p. So when I suspend judgment about whether Goldbach’s conjecture is true I am being unsuccessful again: if it is true, and necessarily so, it is supported by anything, including my own evidence. Can states of suspension of judgment be based on good reasons, then? If being a good reason is just a matter of what the subject takes to be a good reason, then it would appear that the answer is ‘Yes’. But as soon as we start including objective conditions in our explications of ‘is a good reason for’, we will probably meet the same fate.
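On one natural, though by no means mandatory, probabilistic gloss: say that evidence E supports p just in case P(p \mid E) \geq t for some threshold t > 0.5, so that successful suspension about p requires 1 - t < P(p \mid E) < t. If p is a necessary truth and P is a standard (classical) probability function, then P(p \mid E) = 1 for any E whatsoever, and the condition fails no matter what one’s evidence is, which is just the Goldbach worry again.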

Further developments might address all these worries and show that for every success term that applies to belief there is an analogous success term that applies to suspension of judgment. I am a bit skeptical about that, however. Perhaps drawing an analogy with belief, or at least with first-order belief, is a deeply misguided move.