
Gnosticism or: How Logic Fits My Mind

Marcus Kracht

abstract. In this paper I propose a particular algorithm by means of which humans come to understand the meaning of a logical formula. This algorithm shows why it is that some formulae are intuitively easy to understand while others border on the impossible. It also shows that the natural propositional logic is intuitionistic logic, not classical logic.

Keywords: logic, semantics, compositionality, intuitionism

1 Introduction

In this essay I wish to discuss the following question:

How is it that we humans come to understand what is being said?

At first blush this might seem to be just a linguistic question. It also seems that the answer is really simple, that it has been given by Montague, and that it might run as follows:

Theoretically, all we need is the meanings of the elementary constituents (say, words) and the grammatical rules, and then we just calculate the meanings of the sentences according to these rules.

Even though semantics is a domain of linguistics, the basic principles according to which these calculations can (and perhaps do) proceed are, however, of a strictly logical character: first-order predicate logic, the λ-calculus or the like. So, in effect, linguistics borrowed heavily from logic. And we therefore expect that a logician has a story to tell about how a complex expression is or should be evaluated.

And he does. Montague, along with many others dealing with this problem, was a logician. And he believed that formal semantics can essentially be seen as a branch of logic. I basically agree. Yet, there are some problems that need to be addressed.

1. The standard story is not realistic. Nobody calculates, for example, intersections of infinite sets. Thus, the meanings are not the sort of things we can successfully manipulate. There must be something else that we use instead.

2. There is evidence that not all formulae are born alike. Some are easy to understand, others next to impossible. This difference cannot be explained with the standard algorithms.

3. Actual semantic calculations do not stop because some word is undefined. In the standard picture, however, they should.

4. Formal semantics does not provide correlates for topic and focus. Yet there is evidence that they interact with the logical structure. No account is given of that fact.

And so we are left with the feeling that there is much more to be said about the idea of calculating meanings. For a logician it must seem that there is an obvious solution: try proof theory. And indeed, attempts have been made to base semantics on proof theory rather than model theory. This idea goes back at least to Dummett ([2], for a recent assessment see [6])1. My own thinking has been centered around the idea that some sort of proof theoretic calculus might be the answer. Yet the proposals I have seen in this direction are insufficient; they do not stress enough the dynamic nature of proofs. Indeed, Dummett thinks that understanding means knowing what constitutes a proof of the proposition; this is static thinking. But while a proof theorist may ultimately be uninterested in how we get at proofs, for the questions raised above this is a vital problem to address. We could dynamify Dummett's theory as follows: understanding is the process of running through some proof of the proposition and validating it. But the exercise is sterile2. Exchanging one piece of notation for another will not do. In Dummett's case I still wonder how the basic problem is solved by using mathematical proofs.

1Also [3] contains a recent linguistic proposal. See also [8] and references therein.

2In my view a somewhat better notion would be the ability to come up with a "proof" oneself. This latter notion, however, seems to be too strong on many occasions.

For certain things are provable but still seem to be beyond comprehension. What I am looking for is an understanding of the process of understanding, or, as I have come to call the theory of this process, gnosticism. Gnosticism argues that understanding is inherently similar to proof theory in that it must break formulae up in a certain way to see "what is in them". Though this bears resemblance to the Dummett-Prawitz theory of meaning, not least because it gives us intuitionistic logic (IL) in the propositional case, it turns out to be essentially weaker than IL in the predicate logic case, since the format for doing proofs is quite dissimilar. I shall not discuss predicate logic here, however.

What I am going to describe here is a calculus which I formulated in [5]. In it I break with the commonplace idea that there is a neat division between syntactic expressions and meanings and that computations of meaning can be delayed until the full structure of the expression has been settled.

2 The Standard View

Here is how one standardly proceeds to define syntax and semantics for propositional logic. The alphabet is

(1) A := {p, 0, 1, …, 9, ¬, ∧, ∨, →, (, )}

Well-formed formulae (wffs) are defined inductively:

1. p is a wff.

2. If e is a wff, so are e0, e1, …, e9.

3. If e and e′ are wffs, so are (¬e), (e∧e′), (e∨e′) and (e→e′).

The wffs of the form pu, where u is a string of digits, are called variables. The meaning is given either in terms of truth or in terms of assignments. Truth is truth-under-an-assignment, and so a truth-based account starts with a valuation β, which is a function giving each variable a value from {0, 1}, and then proceeds as usual bottom up. The meaning-based approach needs no such function. Let Ass denote the set of all assignments and β ∈ Ass:

1. If v is a variable, [v] := {β : β(v) = 1}.

2. [(¬e)] = Ass − [e].

3. [(e∧e′)] = [e] ∩ [e′].

4. [(e∨e′)] = [e] ∪ [e′].

5. [(e→e′)] = (Ass − [e]) ∪ [e′].
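
These clauses are easy to mechanise once Ass is restricted to the (finitely many) variables actually occurring in a formula. The following is a minimal sketch in Python of the computation just defined; the encoding of formulae and all function names are mine and purely illustrative.

```python
from itertools import product

# Formulae as nested tuples: ('var', 'p0'), ('not', e), ('and', e1, e2),
# ('or', e1, e2), ('imp', e1, e2).  An assignment is represented by the
# set of variables it makes true; Ass is finite here because we only
# enumerate assignments over the variables occurring in the formula.

def variables(e):
    if e[0] == 'var':
        return {e[1]}
    return set().union(*(variables(sub) for sub in e[1:]))

def assignments(e):
    vs = sorted(variables(e))
    return {frozenset(v for v, bit in zip(vs, bits) if bit)
            for bits in product((0, 1), repeat=len(vs))}

def meaning(e, ass):
    """[e]: the set of assignments in 'ass' that make e true."""
    if e[0] == 'var':
        return {b for b in ass if e[1] in b}
    if e[0] == 'not':
        return ass - meaning(e[1], ass)
    if e[0] == 'and':
        return meaning(e[1], ass) & meaning(e[2], ass)
    if e[0] == 'or':
        return meaning(e[1], ass) | meaning(e[2], ass)
    if e[0] == 'imp':
        return (ass - meaning(e[1], ass)) | meaning(e[2], ass)
    raise ValueError(e)

# (p0 -> p0) holds under every assignment:
e = ('imp', ('var', 'p0'), ('var', 'p0'))
ass = assignments(e)
print(meaning(e, ass) == ass)   # True
```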

3 Gnosticism

In gnosticism, none of the above clauses is considered satisfactory. Recall that we are not after an objective (i.e. model theoretic) meaning (for which the above may in fact suffice) but after a procedure that humans may be said to apply. Though I do not make any claims here about whether humans actually do reason in this way, I nevertheless claim that the algorithm developed below has essential characteristics of the one that is being used3. This cannot be said of the above definitions. The meaning-based definition uses infinite sets and so is out of the question. These sets are not what we actually compute. The truth-based account, however, presents too little. It does not do justice to the idea that the variables really are indefinite in truth. (It must be admitted, though, that in natural language we hardly face variables of this kind. Sentences of ordinary language must be considered constants, rather.)

So, if none of these is good enough, what might be a solution? The proposed solution is, in a nutshell, an account based on internal semiosis. The semiosis can be reduced in this context to an algorithm that uses the definitions to unpack the meanings and wrap them back up again. It is best to see an example. The arrow "→" is commonly explained (informally) as follows:

Faced with the problem of judging the truth of "(φ→χ)", first you assume the premiss "φ", then you check whether the conclusion "χ" holds. If so, "(φ→χ)" is judged true, otherwise not.

This is basically Ramsey's idea. To execute this mentally, we need to be able to perform a number of actions:

3I am not even sure that there is a single universal algorithm. But then again there ought to be some sort of characteristics of the human thinking process, and this is what I am looking for.

(1) assume that something is the case, (2) reason with assumptions as if they were true, and (3) retract assumptions when needed.

This is therefore the core of the idea: I propose a certain array of mental operations which I claim are fundamental to human reasoning. Logical connectives are defined by appealing to these fundamental operations rather than by pointing to some objective reality. It then turns out that the semantic correlate of logical connectives seen in these terms is not a set of assignments but rather a certain mental procedure. Although the procedure in itself may correlate with some reality (conjunction = copresence of facts), this is mostly not the way we come to understand them4.

A key ingredient from our point of view is the judgement. This can be both a psychological act and a simple (eternal) fact. The latter use is nowadays more widespread in logic. We write "⊢ φ" to state that φ is (universally) true. This means that φ is a theorem of the logic. On the other hand, seen as an act, we may write (as Frege in fact did) "⊢ φ" to state that an act of judging φ true occurs (see [9]). The notation leaves a lot implicit that could be added, for example the subject and the time point. Thus we could write "⊢J,12:00 φ" to state that John (J) judged φ true at noon. It is then clear that two successive judgement acts may yield different results, while a formula either is or is not a theorem, something that cannot change over time. For example, "This door is closed" may change in truth value, so it may be judged true at some point and false at another. It does seem to me, though, that this notation can only be used from the outside, as a description of what is going on. From the inside (which is the perspective I am adopting here) no such discrimination of time and author is necessary or useful. When I make a judgement, I neither keep track of time nor ask who is judging. It is only when adding this fact to the memory of past actions that I must start to label it with some time. Also, when someone else makes a claim, all I can appreciate is that he expresses a judgement. However, that in itself is not the judgement that is being expressed. It is my judgement of there being a fact that some judgement is expressed. Only this will enter my mind.

4Recall that this is about the computation of meaning. For its justification I still believe that intuitionism or its ilk are rather ill prepared. I do think that we humans entertain meaning realism deep down ([1]) and this serves to justify our thinking. But when it comes to reasoning it is hard to imagine anything but an intuitionistic conception of meanings.


In addition to judging a formula true we can judge it false, possible, true in the future, almost true, necessary, desirable, and so on. There is no limit on the distinctions that can be drawn here, or on the modes of judgement. Judgements are specific noetic acts, or acts of thought. Another such act is assuming. I write "⌐φ" to say that φ is being assumed. Notice that assumptions are not judgements. Judgements assert attitudes we already have towards propositions. These attitudes are mostly stable, while assumptions are of a temporary nature. We are free to assume any proposition at any moment and also to retract this assumption. Also, as explained in [5], the attitude expressed in the judgement ("holding true" if judging something true) is not seen as a relation between the individual and the proposition at that moment but rather between the individual and a formula, though the intention is that the formulae represent the propositions they ordinarily stand for. For example, while "(φ∧φ)" expresses the same proposition (logically speaking) as "φ", the act of judging "(φ∧φ)" true is a different judgement from that of judging "φ" true, as the objects of judgement differ. The reason for that should be clear: it is not always trivial to see that different formulae are saying the same thing (it may even be undecidable). This is the reason I use quotation marks. They should remind us that formulae are mostly seen as material strings (for example, when presented for judgement) and should at that point not be confused with their meaning.

The mind possesses the ability to judge propositions according to certain modes. Thus, presented with a mode, say "⊢", and some proposition "φ", it will come up with a verdict. (I ignore the case where no verdict is reached.) The verdict is either that the formula holds or that it is not said to hold. In the first case the judgement is performed: the act "⊢ φ" occurred. In the second case, however, all we can say is that the act "⊢ φ" did not occur. Technically, no judgement has been issued. The proposed judgement failed. There is often a confusion between "⊬ φ" (that is, "φ" is not judged true) and "⊣ φ" (that is, "φ" is judged false). As if to say that the judges who acquitted someone thought he was innocent. In fact, when they acquit him they only say that he was not found guilty of the charge. They might simply fail to have enough evidence to sentence him.

So they did not say that the person was innocent. Also, they might have found him guilty of some other charge had he been accused of that instead. Likewise, refusing to call a book "good" is not actually saying that it is bad (again, we might lack the evidence), even though such a conclusion is not unreasonable.

4 Gnostic Calculus

I shall now present a sketch of the gnostic calculus. In it, we distinguish between judgement dispositions and judgements. A judgement disposition is a triple Δ v φ, where Δ is a set of formulae, v a mode and φ a formula. The disposition is said to be unconditional if Δ = ∅, and conditional otherwise. A theory is a set of judgement dispositions. A marked proposition is either of the form φ or of the form ⌐φ, where φ is a formula. A slate is a sequence of (marked) propositions. A mental state is a triple

(2) (T, S, A)

where T is a theory, S a slate, and A either empty or containing a single judgement. T consists of all the judgement dispositions that an individual has. It may contain, for example, the disposition to hold "the summer harvest is good" true-in-the-future on condition that "it rains in May" is held true. Crucially, these dispositions need not be logical laws. A is the attention cell. It is the one spot where the mind actually performs judgements. The slate contains a record of past judgements so as to make them available for reasoning.

It is perhaps best to see an example. We start with the null state α0 := (∅, ∅, ∅). In particular, T = ∅. There is a general rule that says we may assume any proposition at any moment. So, we proceed from this initial state to

(3) (∅, ∅, ⌐φ)

The fact that this assumption has been made can be recorded in the slate as follows:

(4) (∅, ⌐φ, ∅)

The little corner "⌐" reminds us of the fact that the proposition φ has been assumed.

There is next a general rule that states that anything assumed is also true.

(5) (∅, ⌐φ, ⊢ φ)

And, finally, a general rule allows us to abstract an assumption by introducing an arrow:

(6) (∅, ∅, ⊢ (φ→φ))

An empty slate means furthermore that the formula judged true is also unconditionally true. Notice that the computation ends in a judgement of a complex formula. There is no way the calculus could begin with such a judgement (unless T is not empty). What this says is that judging "(φ→φ)" true is not direct; it is indirect or mediated. Reasoning is essential in getting there. The longer it takes, the more difficult it is.

The demonstration may have involved T, but in our case it is clear that T plays no role. We have established that (φ→φ) is universally (i.e. logically) true. I would like to caution the reader a little, though. Another way to read this is actually as follows: the arrow encodes the fact that the truth of the premiss was needed to establish the truth of the consequent. From the perspective of gnosticism, the arrow is there to allow us to encode a complex judgement pattern into a single formula.

Here is another sequence (items are concatenated into lists using the semicolon ";").

(7)
(∅, ∅, ⌐φ)
(∅, ⌐φ, ∅)
(∅, ⌐φ, ⌐χ)
(∅, ⌐φ; ⌐χ, ∅)
(∅, ⌐φ; ⌐χ, ⊢ φ)
(∅, ⌐φ, ⊢ (χ→φ))
(∅, ∅, ⊢ (φ→(χ→φ)))

The full set of rules is this.

Assumption From (T, S, ∅) pass to (T, S, ⌐φ).

⌐-Conversion From (T, S, ⌐φ) pass to (T, S; ⌐φ, ∅).

⊢-Conversion From (T, S, ⊢ φ) pass to (T, S; φ, ∅).

⌐-Activation From (T, S, A) pass to (T, S, ⊢ φ), provided S contains ⌐φ.

⊢-Activation From (T, S, A) pass to (T, S, ⊢ φ), provided S contains φ.

Reflection From (T, S; ⌐φ, ⊢ χ) pass to (T, S, ⊢ (φ→χ)).

Forgetting From (T, S; φ; S′, A) pass to (T, S; S′, A).

Firing From (T, S, ⊢ φ) pass to (T, S, ⊢ χ), provided S contains (φ→χ).

These rules are all optional. "From α pass to β" is thus to be interpreted as an option to change one's state from α to β.

Notice that while we may forget facts, it is not allowed to forget assumptions. The following is a weaker formulation of the rule.

From (T, S; φ, A) pass to (T, S, A).

Also, we may in all cases require A to be empty before we make the next step. We can also replace "S contains φ" by "S contains φ or ⌐φ". None of these changes affects the power of the calculus, except that derivations may have different overall length.
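
To make the dynamics concrete, here is a minimal sketch of the calculus in Python. It is only an illustration of the rules stated above, not an implementation from [5]: the theory T is omitted, formulae are plain strings, and all function names are mine.

```python
# Minimal sketch of the gnostic calculus for "->" (the theory T is omitted).
# A mental state is a pair (slate, attention); slate entries are
# ('assumed', f) or ('fact', f); attention is None, ('assumed', f)
# or ('true', f).

def assume(state, f):                  # Assumption
    slate, att = state
    assert att is None
    return (slate, ('assumed', f))

def convert(state):                    # corner- and turnstile-Conversion
    slate, att = state
    kind, f = att
    entry = ('assumed', f) if kind == 'assumed' else ('fact', f)
    return (slate + [entry], None)

def activate(state, f):                # corner- and turnstile-Activation
    slate, att = state
    assert ('assumed', f) in slate or ('fact', f) in slate
    return (slate, ('true', f))

def reflect(state):                    # Reflection
    slate, att = state
    kind, f = slate[-1]
    assert kind == 'assumed' and att is not None and att[0] == 'true'
    return (slate[:-1], ('true', '(' + f + '->' + att[1] + ')'))

def forget(state, f):                  # Forgetting (facts only)
    slate, att = state
    slate = list(slate)
    slate.remove(('fact', f))
    return (slate, att)

def fire(state, prem, concl):          # Firing; the conditional may occur on
    slate, att = state                 # the slate as a fact or as an assumption
    cond = '(' + prem + '->' + concl + ')'   # (the liberal reading above)
    assert att == ('true', prem)
    assert any(entry[1] == cond for entry in slate)
    return (slate, ('true', concl))

# Reproducing the derivation (3)-(6) of "(p->p)":
s = ([], None)
s = assume(s, 'p')       # ([],           assumed p)
s = convert(s)           # ([assumed p],  empty)
s = activate(s, 'p')     # ([assumed p],  |- p)
s = reflect(s)           # ([],           |- (p->p))
print(s)                 # ([], ('true', '(p->p)'))
```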

If you want to understand the role of T, here is a general reasoning rule for it.

Phatic Enaction From (T, S, A) pass to (T, S, v δ), provided that there is a set Δ such that

1. Δ v δ ∈ T, and

2. all members of Δ occur in S.
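
To connect this rule with the example given at the beginning of this section: suppose T contains the conditional disposition with Δ = {"it rains in May"}, mode v = true-in-the-future and δ = "the summer harvest is good". Then from any state (T, S, A) in which "it rains in May" occurs on the slate S, Phatic Enaction licenses the step to (T, S, v "the summer harvest is good").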

From this calculus we may derive the following logical theory. We write "Δ ⊩ φ" if the state (∅, Δ, ⊢ φ) can be reached from α0. In particular, we write "⊩ φ" if the state (∅, ∅, ⊢ φ) can be reached from α0. Notice the following general monotonicity property.

THEOREM 1. If (T, S, A) can be reached from (T, S′, A′), then (T′, S, A) can be reached from (T′, S′, A′), provided that T ⊆ T′.

The proof is easy. It is due to the fact that the rules do not change T. (Throughout this paper we will not deal with rules that change T, although the possibility is an interesting one.)

THEOREM 2. The consequence relation "⊩" coincides with intuitionistic derivability for "→".

So we get intuitionistic logic, though only for the language in "→".

5 Why Do We Not Get Classical Logic?

The reason we do not get classical logic is that in classical logic (φ→χ) does not actually mean that we perform a forward reasoning step. The parts of the formula are not ordered temporally, only linearly. The basic ingredient of the gnostic calculus, however, is forward reasoning coupled with a massive use of the deduction theorem (= Reflection). For example, Frege's formula

(8) ((φ→(χ→ψ))→((φ→χ)→(φ→ψ)))

is quite perspicuous when reduced via (DT).

(9) (φ→(χ→ψ)); (φ→χ); φ ⊢ ψ

Thus it can be proved in the following way: make the assumptions in the order shown, and derive ψ. Use Reflection three times.
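
For concreteness, here is one possible run, writing α for (φ→(χ→ψ)) and β for (φ→χ), and reading "S contains" in the liberal way mentioned in Section 4, so that Firing may use an assumed conditional; the sequence of steps is of course not unique.

(∅, ∅, ⌐α)   Assumption
(∅, ⌐α, ∅)   ⌐-Conversion
(∅, ⌐α, ⌐β)   Assumption
(∅, ⌐α; ⌐β, ∅)   ⌐-Conversion
(∅, ⌐α; ⌐β, ⌐φ)   Assumption
(∅, ⌐α; ⌐β; ⌐φ, ∅)   ⌐-Conversion
(∅, ⌐α; ⌐β; ⌐φ, ⊢ φ)   ⌐-Activation
(∅, ⌐α; ⌐β; ⌐φ, ⊢ χ)   Firing (with β)
(∅, ⌐α; ⌐β; ⌐φ; χ, ∅)   ⊢-Conversion
(∅, ⌐α; ⌐β; ⌐φ; χ, ⊢ φ)   ⌐-Activation
(∅, ⌐α; ⌐β; ⌐φ; χ, ⊢ (χ→ψ))   Firing (with α)
(∅, ⌐α; ⌐β; ⌐φ; χ; (χ→ψ), ∅)   ⊢-Conversion
(∅, ⌐α; ⌐β; ⌐φ; χ; (χ→ψ), ⊢ χ)   ⊢-Activation
(∅, ⌐α; ⌐β; ⌐φ; χ; (χ→ψ), ⊢ ψ)   Firing (with (χ→ψ))
(∅, ⌐α; ⌐β; ⌐φ; χ, ⊢ ψ)   Forgetting
(∅, ⌐α; ⌐β; ⌐φ, ⊢ ψ)   Forgetting
(∅, ⌐α; ⌐β, ⊢ (φ→ψ))   Reflection
(∅, ⌐α, ⊢ ((φ→χ)→(φ→ψ)))   Reflection
(∅, ∅, ⊢ ((φ→(χ→ψ))→((φ→χ)→(φ→ψ))))   Reflection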

On the other hand, (DT) is powerless in view of Peirce's formula

(10) (((φ→χ)→φ)→φ)

The best we can do is

(11) ((φ→χ)→φ) ⊢ φ

But that does not help very much. Thus, as the calculus is decidedly based on forward reasoning using (DT), it shows that Peirce's formula tells us something quite different about →.

There are now different takes on the story. One is to say that the inner engine of the mind is too weak to grasp Peirce's formula. The other, which I will adopt here, is that the definition of → used above does not cover the meaning of → in classical logic. Indeed, one will notice that one thing cannot be derived in this system: two-valuedness. There is basically only one mode of judgement, acceptance. If we want to introduce classical logic, we have several options.

1. We take it on faith: we add Peirce's formula as an axiom (which still means it is opaque). Thus, T contains the formula "(((p0→p1)→p0)→p0)". This requires the use of either variables or schematic letters; see below on that.

2. We add another mode, falsity (⊣), and a principle that declares that any formula must be accepted or rejected. However, I argue against disjunction at the level of modes, so that option is out.

3. We add more connectives and get classical logic by introducing, for example, the law of the excluded middle. However, this too requires some faith. Even though (p0∨(¬p0)) is at first blush more perspicuous, it still cannot be fully resolved in the gnostic calculus.

Below I shall turn to other connectives. This will provide yet another opportunity to obtain classical logic "through the back door".


6 On Dispositions

The meaning of "Δ v φ" is as follows:

Δ v φ ∈ T iff the subject has the disposition to immediately accept the judgement "v φ", provided that he also immediately accepts every member of Δ as true.

The special case Δ = ∅ deserves mentioning.

v φ ∈ T iff the subject has the disposition to immediately accept the judgement "v φ".

This means the following. If the subject were to put the formula "φ" before his mind and consider it in the mode v, he would accept the judgement. Moreover, the acceptance is immediate; it proceeds without further mediation. This distinguishes members of T (direct dispositions) from others (called mediated or indirect dispositions). For example, although a subject will eventually respond with acceptance when presented with the formula "(φ→φ)" even if T does not contain it, this response is not immediate. For we need several steps to get there.

Thus action occurs only when the mind (or someone else) actually poses the judgement: whether "φ" has (or is) v.

It may be thought that it is enough to just ponder the formula, and the mode will appear right away. That is, if I ponder the formula "that Caesar has won the battle of Waterloo", I will immediately reject it. That is, my mind prompts the mode of rejection ("⊣"), which it finds appropriate for that sentence.

There is no denying that such a habit of choosing the best-fit mode exists, but I am not sure that it always yields an answer (I may simply be in the dark as to what to think about certain things), or that the answer is even unique. I may come to conclude, for example, on pondering "2 + 2 = 4", that it is true, and I will not think that it also will be true. So if the mode is "true-in-the-future", it will not occur to me that this formula is also accepted under that mode. But when prompted with the mode "true-in-the-future" I might also accept that latter judgement.

7 Other Connectives

Let us see how we can add connectives to the language. The easiest is conjunction. Consider the following rules.

Left Elimination Pass to (T, S, ⊢ φ), provided (φ∧χ) is in S.

Right Elimination Pass to (T, S, ⊢ χ), provided (φ∧χ) is in S.

Introduction Pass to (T, S, ⊢ (φ∧χ)), provided that φ and χ are contained in S.

It is easy to verify that we can derive the axioms for conjunction of Int.
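
For instance, ((φ∧χ)→φ) can be reached by the following run (one possibility among several):

(∅, ∅, ⌐(φ∧χ))   Assumption
(∅, ⌐(φ∧χ), ∅)   ⌐-Conversion
(∅, ⌐(φ∧χ), ⊢ (φ∧χ))   ⌐-Activation
(∅, ⌐(φ∧χ); (φ∧χ), ∅)   ⊢-Conversion
(∅, ⌐(φ∧χ); (φ∧χ), ⊢ φ)   Left Elimination
(∅, ⌐(φ∧χ), ⊢ φ)   Forgetting
(∅, ∅, ⊢ ((φ∧χ)→φ))   Reflection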

When we look at disjunction, matters are a bit more complex.

Left Introduction Pass to (T, S, ⊢ (φ∨χ)), provided φ is in S.

Right Introduction Pass to (T, S, ⊢ (φ∨χ)), provided χ is in S.

Elimination Pass to (T, S, ⊢ ψ), provided that (φ→ψ), (χ→ψ) and (φ∨χ) are contained in S.

This allows us to deduce the commonly known axioms of Int.
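
Again for illustration, (φ→(φ∨χ)) can be reached as follows:

(∅, ∅, ⌐φ)   Assumption
(∅, ⌐φ, ∅)   ⌐-Conversion
(∅, ⌐φ, ⊢ φ)   ⌐-Activation
(∅, ⌐φ; φ, ∅)   ⊢-Conversion
(∅, ⌐φ; φ, ⊢ (φ∨χ))   Left Introduction
(∅, ⌐φ, ⊢ (φ∨χ))   Forgetting
(∅, ∅, ⊢ (φ→(φ∨χ)))   Reflection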

With respect to negation we may either introduce the connective itself, or introduce ⊥ and define (¬φ) as (φ→⊥). I choose the latter path. Thus, we need rules for ⊥. Here is the only one:

Ex falso quodlibet Pass from (T, S, A) to (T, S, ⊢ φ), provided that ⊥ is in S.

With this we can derive the standard axioms as follows.

(12) ((¬φ)→(φ→χ))

is the same as

(13) ((φ→⊥)→(φ→χ))

(14)
(∅, ∅, ⌐(φ→⊥))
(∅, ⌐(φ→⊥), ∅)
(∅, ⌐(φ→⊥), ⌐φ)
(∅, ⌐(φ→⊥); ⌐φ, ∅)
(∅, ⌐(φ→⊥); ⌐φ, ⊢ φ)
(∅, ⌐(φ→⊥); ⌐φ, ⊢ ⊥)
(∅, ⌐(φ→⊥); ⌐φ; ⊥, ∅)
(∅, ⌐(φ→⊥); ⌐φ; ⊥, ⊢ χ)
(∅, ⌐(φ→⊥); ⌐φ, ⊢ χ)
(∅, ⌐(φ→⊥), ⊢ (φ→χ))
(∅, ∅, ⊢ ((φ→⊥)→(φ→χ)))

The second formula to be derived is this:

(15) ((φ→χ)→((φ→(¬χ))→(¬φ)))

or, after rewriting

(16) ((φ→χ)→((φ→(χ→⊥))→(φ→⊥)))

This is Frege's formula (after permuting the first two premises):

(17) ((φ→(χ→⊥))→((φ→χ)→(φ→⊥)))

So we conclude that the calculus can easily be upgraded to Int. However, once we have the new connectives, we may also get classical logic if we simply add the rule that allows us to judge "(φ∨(¬φ))" true.

8 Variables

I shall briefly comment on the use of variables. In the presentation I made sure to use schematic letters. These are not variables of the language but rather letters standing in for formulae. The language itself may or may not have variables. In general, natural languages do not have much in the way of variables. Base propositions are definite: "The door is closed", "I like cheese", "France is in Europe", and so on.

All these sentences have truth values that we cannot assign freely. Indeed, in Greek philosophy, modes of judgement were roughly given like this:

If the first then the second. The first. Therefore the second.5

Here, the phrases "the first" and "the second" are actual variables. However, are they variables in the sense of logic or are they schematic? In other words, do they quantify over propositions or do they quantify over sentences? In natural language it seems that they do the latter; in logic the answer is not so easy. I have made the point in [4] that consequence relations should be understood with variables ranging over sentences. Yet for truth conditions it seems that we still want to opt for propositions, that is, definable sets of possible worlds.

The difference may be deemed marginal, but the two choices generally call for different formulations of the rules. Schematic letters exclude a rule of substitution: it makes no sense to substitute for expressions. Consider again Peirce's formula. Suppose we want to take this formula as an axiom. The way to do this is to add the formula "(((p0→p1)→p0)→p0)" to T. If we do so, we have to use letters for variables. These are not schematic, as they really have to be encoded in some formula. Or else we would have to provide a schematic rule that allows us to judge any instance of the formula true. The latter would look like this:

Peirce-Introduction Pass from (T, S, A) to (T, S, ⊢ (((φ→χ)→φ)→φ)).

When we do this, however, either method means that we take Peirce's formula on faith. It cannot be derived, it is simply stipulated. So how can it enter the theory T? How could Charles Peirce ever have come to the conclusion that it is to be judged true? My answer is twofold. On the one hand, there is a gap between objective truth and the truth that we can assess. There are propositions whose truth we do not know and yet we assert that they are either true or false. If we build that into the definition of our connectives, we end up with standard truth tables.

5This is due to Chrysippos. In fact, he used Greek ordinals (α′ for "the first" and β′ for "the second") but the essence is the same.

This route, however, needs to be couched in terms of this calculus, something which I must leave for another occasion. It would, however, provide the only avenue for justifying classical logic. On the other hand, there is another process going on, a process which is capable of changing T. I call this process learning. The more evidence we have for some formula, the more we are inclined to judge it true. This is not necessarily because we have made any progress in understanding its content. We may simply have been told that it is true (by some authority, say a teacher or a textbook). Whatever the reasons are, they are different from the process of gnostic understanding, though it is not excluded that we add formulae that are provable (like, for example, "(φ→φ)").

9 Conversions and Meaning

Not unlike in proof theory, the meaning of expressions is covered by the rules that govern the calculus. The rules for "∧", for example, regulate exactly when "(φ∧χ)" is judged true. It is judged true if (and only if) both "φ" and "χ" are judged true. For classical logic (and a host of others) this is actually sufficient.

We can now do semantics in the following way. Suppose I possess a concept "cat". Write "cat′" for this concept. Then I can understand the word "cat" by connecting it with this concept as follows.

Elimination From (T, S, ⊢ cat(x)) pass to (T, S, ⊢ cat′(x))

Introduction From (T, S, ⊢ cat′(x)) pass to (T, S, ⊢ cat(x))

I call these types of rules conversion rules. Notice that I use English words in the same way as mental concepts. This is done on purpose. There is no way we can suppose that speakers have formed a particular concept for every existing word of English. They may know the words but otherwise be clueless as to what they mean. This is in part Putnam's problem of meaning [7], though he frames it as a problem of having the right meaning. Here I wish to say that it may actually be of not much help to suggest that the counterpart of "elm" is a concept of elmhood which I possess. Rather, I may in fact have no such concept. It does not, however, stop me from processing these words and reasoning with them. For it may well be the case that I know a few things about elms that I can use successfully without thinking that I know exactly what they are6.

Thus, the mental calculus should be able to use words even if no concept exists to back them.

Given that, we can also have rules of this form:

Elimination From (T, S, ⊢ chat(x)) pass to (T, S, ⊢ cat(x))

Introduction From (T, S, ⊢ cat(x)) pass to (T, S, ⊢ chat(x))

These rules effectively determine the meaning of (French) "chat" in terms of (English) "cat" rather than translating it directly into the concept.

Interestingly, when a word is ambiguous it allows for two pairs of conversion rules. This may actually create dangerous situations. For example, a "bank" may be a financial institution and a place to sit on. Having both rules means that if something is called a bank it may at the same time be thought a financial institution and a place to sit on, because we may apply both rules of conversion for "bank". The way to prevent this, though, is not to exclude these conversion rules but rather to make them destroy the input before moving on (as is done here). Note that actual usage by humans suggests that they fix an understanding of the words and do not easily allow mixing.

Conversion rules are very important for our capability to reason. Suppose we have judged a formula false. How can this affect our behaviour? Obviously, after we have made the judgement it is lost as soon as a new judgement is made. However, it is often the case that we want to remember temporarily that we have performed that judgement. Then we would like to put it on the slate. This requires, however, that we put it down in the form of a fact. The way to do this is to allow the conversion of "⊣ φ" into "⊢ (¬φ)". The formula stored on the slate is thus "(¬φ)". Note that the intermediate judgement "⊢ (¬φ)" may or may not occur, depending on the exact formulation of the conversion. What is important is that the negation sign is a way to reduce the judgement of falsity to a judgement of truth of some formula.

6I note in passing that the calculus does not allow us to specify whether or not knowledge of meaning is complete. Whether that is good remains to be seen.

Likewise, when we introduce appropriate connectives, we can convert other judgements into judgements of truth.

10 General Significance

The mental model presented here needs to be connected with the intended mode. When we judge "φ" true, we intend to say that φ is actually true. Likewise, when we reject "φ", we do want to say that it is false. We are, at least in certain circumstances, well aware of our incomplete knowledge. Then we will not say, for example, that not judging "φ" true actually means that it is false7. Thus, I am endorsing alethic realism, though with a somewhat odd twist. For I am claiming that while we may entertain a belief in classical logic with respect to the objective truth conditions of an expression, we may still find them useless in our own subjective assessment of its meaning. This has uneasy philosophical consequences which I have not been able to resolve. One is a certain schizophrenic attitude towards logical connectives like implication, for which the objective and the subjective meanings diverge8. The happy coincidence, though, is that the subjective calculus is at least correct, and so whatever is subjectively true is also objectively true.

Now, I have said that the modes of "truth" and "falsity" are independent. This raises the question whether there is a guarantee that contradictory judgements are excluded. In my view the answer is "no"; furthermore, there should not be such a guarantee. First, it is well known that there are propositions that are neither true nor false (due to presupposition failure, for example). Also, there are sentences which we may judge both true and false. What changes is the viewpoint under which we call them true or false. This may be perplexing because one is used to the idea that judgements are made of propositions, and these should have a definite meaning. But here judgements are judgements of sentences; the fact that sentences may be ambiguous leads straightforwardly to four-valued logic. Vagueness does, too. How we can get rid of it is another question.

7Furthermore, there are cases in which we miscalculate. We judge true where we should have refrained, and conversely. These things happen, but are not the concern of this paper.

8 Precisely what is observed in reality. The literature on how to interpret implication could not be more diverse.

But if the language is used to express thoughts, then the ambiguity inherent in the expressions is a factor not to be ignored.

To give a logical example, consider the language above but with all brackets erased. The formula "φ∧χ∨ψ" may under certain circumstances be both true and false. Suppose namely that φ is false, χ is true and ψ is true. Then the formula may be judged true because ψ is judged true (Right Introduction for disjunction). Or it may be judged false since φ is judged false (after formulation of an appropriate rule for the rejection of a conjunction).

This is a problem that creates much confusion. It is true that the same fact cannot both be the case and not be the case. But since a sentence may be made true by many facts, and also be made true in different ways, it is not clear that we cannot both accept and reject a sentence. For example, for a book to be "red" we may decide that what it needs to have is a red cover. However, if the cover also has pictures on it, it is not clear what exactly makes a cover "red" and how much the presence of other colours disturbs its redness. We may invent a list of criteria, but they all seem to be post hoc. We formulate a judgement, but how the judgement eventually comes out seems to depend in subtle ways on the context. In my opinion it is wrong to say that all this is encoded in the notion of being red (or of being a red cover). There may be no a priori fixed way of applying the meaning of such a simple concept as "red" in various circumstances, and consequently no fixed answer to the question whether a given object falls under the concept in these circumstances.

If this is so, the significance of the calculus is far wider. It shows us that the mental calculus must be partly removed from reality for the reason that the objects that are being manipulated lack a clearly defined semantics to begin with. It seems that the semantics is often discovered after the judgement. It may disappoint philosophers, but in my view it is a fact of life. We ought to learn to live with it.

References

[1] William P. Alston. A Realist Conception of Truth. Cornell University Press, Ithaca and London, 1996.

[2] Michael Dummett. What is a theory of meaning? (ii). In G. Evans and J. McDowell, editors, Truth and Meaning. Oxford University Press, Oxford, 1976.

[3] Nissim Francez and Roy Dyckhoff. Proof Theoretic Semantics for a Natural Language Fragment. Linguistics and Philosophy, 33:447–477, 2010.

[4] Marcus Kracht. Judgement and Consequence Relations. Journal of Applied Nonclassical Logic, 20:223–235, 2010.

[5] Marcus Kracht. Gnosis. Journal of Philosophical Logic, 40:397–420, 2011.

[6] Peter Pagin. Compositionality, understanding and proofs. Mind, 118:713–737, 2010.

[7] Hilary Putnam. Representation and Reality. MIT Press, Cambridge, Mass., 1988.

[8] Aarne Ranta. Type-Theoretical Grammar. Oxford University Press, 1994.

[9] Nicholas J. J. Smith. Frege's Judgement Stroke and the Conception of Logic as the Study of Inference not Consequence. Philosophy Compass, 2009.
