




Logical Investigations 2019, Vol. 25, No. 2, pp. 61-74 DOI: 10.21146/2074-1472-2019-25-2-61-74


Arithmetics based on computability logic

Giorgi Japaridze

Villanova University, 800 Lancaster Avenue, Villanova, PA 19085, USA.

Institute of Philosophy of RAS, 12/1 Goncharnaya Str., Moscow, 109240, Russian Federation.

E-mail: giorgi.japaridze@villanova.edu

Abstract: This paper is a brief survey of number theories based on computability logic (CoL), a game-semantically conceived logic of computational tasks and resources. Such theories, termed clarithmetics, are conservative extensions of first-order Peano arithmetic.

The first section of the paper lays out the conceptual basis of CoL and describes the relevant fragment of its formal language, with so called parallel connectives, choice connectives and quantifiers, and blind quantifiers. Both syntactically and semantically, this is a conservative generalization of the language of classical logic.

Clarithmetics, based on the corresponding fragment of CoL in the same sense as Peano arithmetic is based on classical logic, are discussed in the second section. The axioms and inference rules of the system of clarithmetic named CLA11 are presented, and the main results on this system are stated: constructive soundness, extensional completeness, and intensional completeness.

In the final section two potential applications of clarithmetics are addressed: clarithmetics as declarative programming languages in an extreme sense, and as tools for separating computational complexity classes. When clarithmetics or similar CoL-based theories are viewed as programming languages, programming reduces to proof-search, as programs can be mechanically extracted from proofs; such programs also serve as their own formal verifications, thus fully neutralizing the notorious (and generally undecidable) program verification problem. The second application reduces the problem of separating various computational complexity classes to separating the corresponding versions of clarithmetic, the potential benefits of which stem from the belief that separating theories should generally be easier than separating complexity classes directly.

Keywords: Computability logic, Peano arithmetic, game semantics, constructive logic, intuitionistic logic, linear logic, interactive computability

For citation: Japaridze G. "Arithmetics based on computability logic", Logicheskie Issledovaniya / Logical Investigations, 2019, Vol. 25, No. 2, pp. 61-74. DOI: 10.21146/2074-1472-2019-25-2-61-74

© Japaridze G.

1. Computability logic (CoL): a formal theory of computability in the same sense as classical logic is a formal theory of truth

This talk is about number theories based on computability logic. I will be using the abbreviation CoL for the latter. So, first of all, what is computability logic? It is an approach introduced by myself some time ago, which I characterize as a formal theory of computability in the same sense as classical logic is a formal theory of truth. Let us compare the two logics to see what this means.

In classical logic the central semantical concept is truth, formulas represent statements, and the main utility of classical logic is that it provides a systematic answer to the questions: 1) Is P (always) true? 2) Does truth of P (always) follow from truth of Q?

Now, what is computability logic and how does it differ from this? Everything is the same, except that truth is replaced with computability. So the central semantical concept here is computability. Formulas represent not just statements as before, but computational problems, as computability is a property of computational problems. And the logic provides a systematic answer to the questions: 1) Is P (always) computable? 2) Does computability of P (always) follow from computability of Q? Moreover, it provides answers to these two questions in a constructive sense. Namely, when it establishes that P is computable, it does not merely establish the existence of a computation (algorithm) for P, but also tells us exactly how to compute P; similarly, it tells us how to construct an algorithm for P from an algorithm for Q.

Things are naturally arranged in such a way that classical statements (predicates, propositions) are special cases of computational problems, and classical truth is a special, simplest case of computability. This eventually makes classical logic a conservative fragment of computability logic: the language of CoL is more expressive than that of classical logic, containing the latter just as a fragment, but if we restrict the language back to that of classical logic, CoL's semantics validates nothing less and nothing more than what the classical semantics does.

Anyway, first and foremost, we want to agree on our understanding of what computational problems are. If you go back to Church, computational problems are just functions to be computed. But for us the understanding of computational problems is more general. Namely, a computational problem is a zero-sum game between a machine, symbolically named ⊤, and its environment, symbolically named ⊥. Functions are just special cases of such games, but otherwise here we have computational problems of arbitrary degrees of interactivity (the problem of computing a function is not very interactive, with only two steps involved: receiving an input and generating an output).

I don't specify or define exactly what is meant by a "machine", but let us understand it as an algorithm. So, instead of a machine you can always think of an interactive algorithm, some mechanical procedure to follow. We say a machine M wins (computes, solves) a game/problem G iff M wins every run of G regardless of how the environment acts. Such an M is said to be an algorithmic solution (algorithmic winning strategy) for G. A problem/game is computable iff it has an algorithmic solution.

CoL's logical operators (connectives, quantifiers) represent operations on games. There is a whole zoo of operators, but in this presentation we only consider the modest fragment of it consisting of

¬, ∧, ∨, ∀, ∃, ⊓, ⊔, ⊓, ⊔.

Here we see all operators of classical logic and something in addition to that. These operators, just like all operators of CoL for that matter, are operations on games. The classically-shaped operators generalize their classical counterparts, and do so in a conservative fashion: such operators automatically retain their classical meanings when applied to statements that also happen to be legitimate sentences of classical logic.

A run of a game is a sequence of moves, each one prefixed with ⊤ or ⊥ to indicate which player has made the move. I am not giving any formal definitions in this presentation, but of course they do exist.

Atomic sentences such as 2 + 2 = 4 are moveless games, that is, games whose only legal run is the empty run (). Such a game is won by the machine if the sentence is true in the classical sense, and lost if false. The same can be seen to be the case for all formulas built from atoms using only ¬, ∧, ∨, ∀, ∃ once we define these operators.

So, (the empty run of) 2 + 2 = 4 or ∀x(x = x) is won by the machine, and 2 + 2 = 5 or 0 = 1 ∧ 2 = 2 is lost. This means the problem/game 2 + 2 = 4 or ∀x(x = x) is computable (by a machine that does nothing) while 2 + 2 = 5 or 0 = 1 ∧ 2 = 2 is incomputable.

The bottom line to remember here is that formulas of classical logic represent moveless games. Some may ask in frustration: "How can we call something a game if there are no moves in it?!" Well, let me remind you of the situation with the concept of zero. The Romans did not have this concept, and the subsequent European tradition for a long time resisted the idea of accepting zero as a legitimate number. The attitude was that a number is supposed to represent a quantity, but zero means no quantity at all, so it is not a meaningful number. But now we know that we cannot do much in mathematics without zero. Similarly, the above moveless sorts of games make perfect sense under CoL's approach.

Now let us look at the operations on games that we will be dealing with.

First comes the choice conjunction of two games A and B, written A ⊓ B and read "A chand B". This is a game where the first legal move is (only) by the environment, which should choose between the two available moves "left" and "right". After that the game continues according to the rules of the chosen component A or B, respectively. This choice is not only a privilege of the environment, but also an obligation, because if the environment fails to make a move (choice) here, then it loses, with the machine correspondingly considered to be the winner.

Choice disjunction ("chor") A ⊔ B is similar, with the difference that here it is the machine which makes the first move/choice and which loses if no such move is made. So the roles of the machine and the environment are interchanged here.

The choice universal quantification of a game A(x) is in fact a "big choice conjunction" in the sense that this game, written ⊓xA(x) and read "chall xA(x)", is a game where the first legal move is, just like in the case of choice conjunction, by the environment, which, however, instead of choosing between left and right, simply chooses a natural number n, after which the game continues as A(n). If no such move is made, then the environment loses.

Choice existential quantification ⊔xA(x) ("chexists xA(x)") is similar, with the difference that, in it, it is the machine which makes the first move and which loses if no move is made.

Here is an example. (⊥6, ⊤left) is a machine-won run of the game ⊓x(Even(x) ⊔ Odd(x)), and the following sequence shows how it "modifies" the game:

⊓x(Even(x) ⊔ Odd(x))  ⇒  Even(6) ⊔ Odd(6)  ⇒  Even(6).

According to this scenario, the environment chose 6 for x, after which the game continued as Even(6) ⊔ Odd(6). In response, the machine made the move left, meaning that the left component was chosen, so the game continued as Even(6). It also ended as Even(6), because no further moves were made (or could have been made). The machine won as Even(6) is a true proposition, and true propositions are automatically won by the machine.
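If it helps, the behavior of such a strategy can be sketched in a few lines of ordinary code. The following Python fragment is only an informal illustration with an encoding of moves that I am making up on the spot; it is not part of CoL's formal machinery.

```python
def even_or_odd_strategy(environment_moves):
    """A winning strategy for the game ⊓x(Even(x) ⊔ Odd(x)).

    environment_moves is an iterator over the environment's moves; the only
    move expected here is the natural number it chooses for x.  The returned
    string is the machine's move resolving the choice disjunction.
    """
    n = next(environment_moves)                  # environment's move: a value for x
    return "left" if n % 2 == 0 else "right"     # pick Even(n) or Odd(n)

# Reproducing the run (⊥6, ⊤left) discussed above:
print(even_or_odd_strategy(iter([6])))           # prints "left"
```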

Negation ¬, read as "not", is a role switch operation: the moves and wins of the machine become those of the environment, and vice versa. More precisely: for a run Γ, let the negative image of Γ mean the result of changing all labels in Γ to their opposite. For instance, the negative image of (⊥6, ⊤left) is (⊤6, ⊥left). Then, given a game G, ¬G is the game whose legal runs are the negative images of those of G, that is, the environment and the machine have interchanged their moves, and where a given player wins a given run of the game iff the other player wins, in the sense of G, the negative image of that run.

Example: we have ¬⊓x(Even(x) ⊔ Odd(x)) = ⊔x(Odd(x) ⊓ Even(x)). Indeed, consider the negation of ⊓x(Even(x) ⊔ Odd(x)). Without a negation this is a universally quantified game, meaning that the environment makes the first move in it. But negation interchanges players' roles, so now it is the machine that can make the first move. That is why ⊓x becomes ⊔x. For similar reasons, ⊔ becomes ⊓, Even becomes Odd (= ¬Even) and Odd becomes Even.

We can see that DeMorgan's laws remain valid with choice operators. In fact, CoL has several (4+) sorts of conjunctions, disjunctions and quantifiers, and DeMorgan's laws go through for all of them.

When applied to an atomic sentence S (or any moveless game for that matter), ¬ behaves exactly like classical negation, because the only legal run in games represented by atomic sentences is the empty run (), and the negative image of () is the same (); so, only the winners are interchanged, and the machine wins this "run" of S (in other words, S is true) iff the environment wins it in the game ¬S (that is, ¬S is false).

The same can be seen to be the case with all operators that look like operators of classical logic: ∧, ∨, ∀, ∃. When applied to moveless games, the behavior of such operators is exactly classical. Their new meanings coincide with their classical meanings, and this happens naturally/automatically rather than being "manually" postulated.

Whether S is moveless or not, we have ¬¬S = S. The double negation principle remains valid, because interchanging the roles twice brings each player back to its original role.
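On the level of runs, all negation does is flip labels, and flipping twice gives back the original run, which is exactly why ¬¬S = S. Here is a toy sketch, with runs encoded as lists of (label, move) pairs of my own devising:

```python
def negative_image(run):
    """Turn every ⊤-labeled move into a ⊥-labeled one and vice versa."""
    flip = {"T": "B", "B": "T"}          # "T" stands for ⊤, "B" for ⊥
    return [(flip[label], move) for (label, move) in run]

run = [("B", "6"), ("T", "left")]        # the run (⊥6, ⊤left)
print(negative_image(run))               # [('T', '6'), ('B', 'left')], i.e. (⊤6, ⊥left)
print(negative_image(negative_image(run)) == run)   # True: a double role switch is the identity
```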

The parallel conjunction of games A and B, written A ∧ B and read "A pand B", is a game where A and B are played simultaneously (no choice is made between the two), and where, in order to win, the machine needs to win in both components. To indicate in which component a given move is made, it should be prefixed with "left" or "right".

Parallel disjunction ("por") A ∨ B is similar, with the only difference that here winning in just one component is sufficient.

Example: (⊥right.6, ⊤left.6, ⊥left.left, ⊤right.left) is a legal run of the game

⊔x(Odd(x) ⊓ Even(x)) ∨ ⊓x(Even(x) ⊔ Odd(x)).

The meaning of the first move ⊥right.6 is that the environment chooses 6 for x in the right ∨-disjunct. This turns the game into

⊔x(Odd(x) ⊓ Even(x)) ∨ (Even(6) ⊔ Odd(6)).

The move ⊤left.6 made by the machine in response signifies choosing the same number 6 for x in the left ∨-disjunct, and the game correspondingly continues as

(Odd(6) ⊓ Even(6)) ∨ (Even(6) ⊔ Odd(6)).

The next move ⊥left.left by the environment, whose meaning is choosing the left ⊓-conjunct in the left ∨-disjunct, further brings the game down to Odd(6) ∨ (Even(6) ⊔ Odd(6)). The final move ⊤right.left made by the machine turns it into Odd(6) ∨ Even(6). The machine wins because Even(6) is true and thus it wins in one of the two ∨-disjuncts.
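The bookkeeping behind parallel play is equally simple: a move prefixed with "left" or "right" is just routed to the corresponding component, so each component sees its own run. Here is a rough Python sketch of that routing, again with my own ad hoc encoding of moves:

```python
def split_parallel_run(run):
    """Split a run of A ∧ B or A ∨ B into the runs of its two components.

    Each move is a string "left.<submove>" or "right.<submove>", paired with
    the label of the player who made it ("T" for ⊤, "B" for ⊥).
    """
    left_run, right_run = [], []
    for label, move in run:
        component, submove = move.split(".", 1)
        (left_run if component == "left" else right_run).append((label, submove))
    return left_run, right_run

# The run (⊥right.6, ⊤left.6, ⊥left.left, ⊤right.left) from the example above:
run = [("B", "right.6"), ("T", "left.6"), ("B", "left.left"), ("T", "right.left")]
print(split_parallel_run(run))
# left component sees (⊤6, ⊥left), right component sees (⊥6, ⊤left)
```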

Parallel implication ("pimplication") → is simply understood as a standard abbreviation: A → B =df ¬A ∨ B. Due to the prefix ¬, the players' roles are interchanged in A, turning it into a computational resource that can be used by ⊤ in solving B. The computational intuition associated with this combination is the most interesting one as, intuitively, A → B can be seen to be the problem of reducing B to A. Here is an example. Consider the following two predicates:

• Halts(x,y) = "Turing machine x halts on input y";

• Accepts (x, y) = "Turing machine x accepts input y".

The renowned Halting problem as a decision problem can be written this way: ⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y)). In this game the environment chooses some particular values for x and y, and the machine should reply by telling whether x halts on y (⊤left) or not (⊤right). Similarly for the Acceptance problem ⊓x⊓y(Accepts(x, y) ⊔ ¬Accepts(x, y)). Both problems are known to be undecidable. But the pimplication from one to the other can be shown to be computable.

⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y))  →  ⊓x⊓y(Accepts(x, y) ⊔ ¬Accepts(x, y))

(The antecedent here is the Halting problem and the consequent is the Acceptance problem, so the formula as a whole expresses the reduction of the acceptance problem to the halting problem.)

The winning strategy here relies on reducing the consequent to the antecedent. Roughly it goes like this. While both players have initial legal moves in this game, the machine waits until the environment makes a choice of some values m and n for x and y in the consequent (otherwise it loses) and thus brings the game down to

⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y))  →  Accepts(m, n) ⊔ ¬Accepts(m, n).

Then the machine chooses the same m and n in the antecedent, so the game continues as

Halts(m, n) ⊔ ¬Halts(m, n)  →  Accepts(m, n) ⊔ ¬Accepts(m, n).

Now, again, both players have legal moves: ⊤ in the consequent and ⊥ in the antecedent, but it is wise for the machine to wait and let the environment move first. If the environment fails to make a move, the environment loses in the antecedent and thus the machine wins the overall game. So, assume the environment makes a move in the antecedent, telling whether m halts on n or not. We may assume that whatever the environment says here is true, or else it loses. If it says that m does not halt on n, then the machine chooses ¬Accepts(m, n) in the consequent and wins, because if m does not halt, then it does not accept either. Otherwise, if the environment says that m halts on n, the machine simulates the work of m on input n, and this simulation will show whether the halting was with acceptance or rejection. The machine correspondingly chooses between Accepts(m, n) and ¬Accepts(m, n) in the consequent and wins.
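For the programming-minded reader, the same strategy can be sketched as follows. This is only an illustration: ask_halting_oracle stands for the machine's interaction with the antecedent (with the environment assumed to answer truthfully, or else it loses), and simulate stands for step-by-step simulation of a Turing machine; both names are my own and are not real library calls.

```python
def accepts_via_halts(m, n, ask_halting_oracle, simulate):
    """Decide Accepts(m, n) using the halting problem as an interactive resource.

    ask_halting_oracle(m, n) -- plays the antecedent: returns True iff Halts(m, n)
    simulate(m, n)           -- runs Turing machine m on input n to completion and
                                returns True iff it accepts; only called once
                                halting has been guaranteed
    """
    if not ask_halting_oracle(m, n):
        return False              # m does not halt on n, hence does not accept n
    return simulate(m, n)         # halting is guaranteed, so the simulation terminates
```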

What we call blind quantifiers are ∀ ("blall") and ∃ ("blexists"). ∃ is the DeMorgan dual of ∀ (∃ = ¬∀¬), so let us just look at the blind universal quantification ∀xA(x). Unlike ⊓, no move is associated with ∀, i.e., no value for x is specified (by either player); in order to win, the machine needs to play "blindly" in a way that guarantees success in A(x) for every possible value of x. As an example, consider the game

∀x(Even(x) ⊔ Odd(x) → ⊓y(Even(x + y) ⊔ Odd(x + y))).

The following scenario unfolds according to the run (⊥right.5, ⊥left.right, ⊤right.left). The environment chooses 5 for y in the consequent, bringing the game down to

∀x(Even(x) ⊔ Odd(x) → Even(x + 5) ⊔ Odd(x + 5)).

The machine has to tell whether x + 5 is even or odd. But it does not know x. Can it still determine the parity of x + 5? Yes, it can, if it just knows whether x is even or odd, without otherwise knowing what particular number x is. And this information about the parity of x can be obtained from the antecedent, where the environment is obligated to move. By making the move ⊥left.right and bringing the game down to ∀x(Odd(x) → Even(x + 5) ⊔ Odd(x + 5)), the environment says that x is odd. But then x + 5 is even, so the final move ⊤right.left, resulting in ∀x(Odd(x) → Even(x + 5)), makes the machine the winner. Notice how ∀ persisted in the above sequence of games, and so did →. Generally, when a game evolves in this fashion, its classical structure remains unchanged; all changes happen exclusively in the choice (⊓, ⊔, ⊓, ⊔) components.
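The machine's reasoning in this last example fits in a few lines of code. The sketch below (with my own encoding of moves) only uses the parity of x, never x itself, which is precisely what playing "blindly" amounts to here:

```python
def parity_strategy(y, x_is_odd):
    """Strategy for ∀x(Even(x) ⊔ Odd(x) → ⊓y(Even(x+y) ⊔ Odd(x+y))).

    y        -- the number chosen by the environment in the consequent
    x_is_odd -- the environment's move in the antecedent (True means "x is odd")
    Returns the machine's move in the consequent: "left" for Even(x+y),
    "right" for Odd(x+y).  The actual value of x is never needed.
    """
    sum_is_odd = x_is_odd != (y % 2 == 1)    # odd+odd and even+even are even
    return "right" if sum_is_odd else "left"

# The scenario from the text: y = 5, and the environment declares x odd.
print(parity_strategy(5, True))              # prints "left", i.e. Even(x + 5)
```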

2. Clarithmetics: formal number theories based on CoL

In CoL, the standard concepts of time and space complexities are naturally and conservatively generalized to the interactive level, making them meaningful for games. Also, a new complexity measure, amplitude complexity, is introduced. It is concerned with the sizes of ⊤'s moves relative to the sizes of ⊥'s moves.

Instead of computability-in-principle, we can now just as meaningfully speak about computability with limited resources (amplitude, space, time). For instance, polynomial space computability rather than just computability.

Example: each of the problems in the left column below is polynomial time computable. The right column tells us what this means in standard terms.

⊓x⊓y(x = y ⊔ x ≠ y) : "=" is polynomial time decidable

⊓x⊓y⊔z(z = x + y) : "+" is polynomial time computable

⊓x(∃y(x = y × y) → ⊔y(x = y × y)) : "square root", when it exists, is polynomial time computable

⊓x⊔y(p(x) ↔ q(y)) : p is polynomial time reducible to q
(here A ↔ B abbreviates (A → B) ∧ (B → A))

Clarithmetics are formal number theories based on CoL in the same sense as PA (Peano arithmetic) is based on classical logic, usually in the language whose logical vocabulary is the one that we surveyed in the preceding section: ¬, ∧, ∨, ∀, ∃, ⊓, ⊔, ⊓, ⊔. The non-logical vocabulary is standard: 0, ', +, ×, = (x' means x + 1). We need some technical preliminaries before we proceed.

• By a bound we mean a pterm (term or pseudoterm) for some monotone function (the term "pterm" was coined by George Boolos). Pseudoterms are not really terms strictly speaking, but one can pretend that we do have special terms for them in the language. Nothing will change this way, because each formula with pseudoterms can be equivalently rewritten as one with only real terms. Terminologically, we identify a pterm with the function it represents.

• A boundclass is a set of bounds closed under variable renaming.

• A tricomplexity is a triple R = (Ramplitude, Rspace, Rtime) of boundclasses satisfying certain regularity conditions. I will not show those conditions here, but they are natural. Suffice it to say that all naturally emerging tricomplexities are regular as long as Ramplitude is at least linear (contains all linear functions), Rspace at least logarithmic, and Rtime at least polynomial.

• An R tricomplexity solution of a given problem G is a solution for G whose amplitude complexity is upper-bounded by some bound from Ramplitude, space complexity by some bound from Rspace, and time complexity by some bound from Rtime. In other words, this is a winning strategy/algorithm for ⊤ that runs in time from Rtime, in space from Rspace and in amplitude from Ramplitude.

• |x| is the standard arithmetization of the function "the length of the binary representation of x", i.e., of "⌈log2(x + 1)⌉".

• Bit(x, y) is the standard arithmetization of the predicate "the y'th least significant bit of the binary representation of x is 1", i.e., of "⌊x/2^y⌋ mod 2 = 1". (A programming-terms transcription of these two pterms is given right after this list.)

• s (resp. |s|) means a tuple (s1, ..., sn) (resp. (|s1|, ..., |sn|)), where s1, ..., sn are variables.

• Let B be a boundclass. We say that a formula F is B-bounded iff every ⊓-subformula of F has the form ⊓x(|x| < b(|s|) → G) and every ⊔-subformula has the form ⊔x(|x| < b(|s|) ∧ G), where x, s are pairwise distinct variables not bound by ∀, ∃ in G, and b is a bound from B.
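As promised, here is how the pterms |x| and Bit(x, y) from the list above look in ordinary programming terms; this is just a transcription for intuition, not part of the formal language:

```python
def length(x):
    """|x|: the length of the binary representation of x, i.e. ⌈log2(x + 1)⌉."""
    return x.bit_length()            # bit_length() of 0 is 0, matching ⌈log2(0 + 1)⌉ = 0

def bit(x, y):
    """Bit(x, y): True iff the y'th least significant bit of x is 1."""
    return (x >> y) & 1 == 1         # the same as ⌊x / 2^y⌋ mod 2 = 1

print(length(6), bit(6, 1), bit(6, 0))   # 3 True False   (6 is 110 in binary)
```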

A number of systems of clarithmetic were developed for various purposes, corresponding to various complexity classes. In this presentation we will look only at one system of clarithmetic, called CLA11. It is a scheme of clarithmetical theories rather than a particular theory, generating a particular theory CLA11R for each tricomplexity R.

We fix some tricomplexity R = (Ramplitude, Rspace, Rtime) to define the corresponding theory CLA11R. The logical basis (axioms, rules) of such a theory is a certain axiomatization CL12 of CoL, sound and complete in a very strong sense. It is the logical basis of CLA11R in the same sense as classical logic is a logical basis of Peano arithmetic. Having said that, here we shall only focus on the nonlogical postulates of CLA11R.

The nonlogical axioms of CLA11R are the following:

1. ∀x¬(0 = x')


2. ∀x∀y(x' = y' → x = y)

3. ∀x(x + 0 = x)

4. ∀x∀y(x + y' = (x + y)')

5. ∀x(x × 0 = 0)

6. ∀x∀y(x × y' = (x × y) + x)

7. The ∀-closure of F(0) ∧ ∀x(F(x) → F(x')) → ∀xF(x), for each ⊓, ⊔, ⊓, ⊔-free formula F

8. ⊓x⊔y(y = x')

9. ⊓x⊔y(y = |x|)

10. ⊓x⊓y(Bit(x, y) ⊔ ¬Bit(x, y))

Formulas (1)-(7) are nothing but the so-called Peano axioms, that is, axioms of standard PA. Of course, (7) is a scheme of axioms, generating infinitely many particular axioms. In addition, we have three extra-Peano axioms (8)-(10). The first one expresses the computability of the successor function. The second one expresses the computability of the logarithm function. The third one says that the question of telling whether the y'th least significant bit of (the binary representation of) x is 1 or not is decidable.

Along with CLA11R we also consider CLA11R + TA, where TA stands for "Truth Arithmetic". CLA11R + TA differs from CLA11R in that it has all true sentences of classical arithmetic (rather than just the Peano axioms) as additional axioms. Of course, this system, unlike CLA11R, is no longer recursively axiomatizable.

On top of axioms, CLA11R has the following two nonlogical rules of inference:

• Induction:

F(0)    ⊓x(F(x) → F(x'))
------------------------
⊓x < b(|s|) F(x)

where x and s are pairwise distinct variables, F(x) is an Rspace-bounded formula, and b is a bound from Rtime;

• Comprehension:

⊓y(p(y) ⊔ ¬p(y))
----------------------------------------
⊔|x| < b(|s|) ∀y < |x| (Bit(x, y) ↔ p(y))

where x, y and s are pairwise distinct variables, p(y) is a ⊓, ⊔, ⊓, ⊔-free formula not containing x, and b is a bound from Ramplitude.

As we see, the above induction rule, unlike its standard counterpart, has ⊓ instead of ∀. The intuition here is that, if we know how to compute F(0) and we also know, for all x with x < b(|s|), how to reduce F(x') to F(x), then we know how to compute F(x) for any such x. A restriction is that F(x) should be an Rspace-bounded formula and the bound b should be taken from Rtime.
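The computational content of the rule can be pictured with the following toy function; the actual strategies extracted from proofs are interactive and resource-bounded, which this sketch deliberately ignores:

```python
def solve_by_induction(n, base_solution, reduce_step):
    """Obtain a solution of F(n) from a solution of F(0) and a uniform way of
    turning a solution of F(x) into a solution of F(x + 1).

    base_solution -- a solution of F(0)
    reduce_step   -- a function taking x and a solution of F(x) and returning
                     a solution of F(x + 1)
    """
    solution = base_solution
    for x in range(n):
        solution = reduce_step(x, solution)   # use F(x) as a resource for F(x + 1)
    return solution
```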

The intuition associated with the comprehension rule is that, if you can decide the predicate p, then you can generate (hence, chexists) the number x such that, for any n, the n'th least significant bit of x is 1 if and only if p is true of n. The decidable predicate p thus generates its own number in the sense that it tells us each bit of the binary representation of x. Again, we have a restriction here, according to which b should be a bound from Ramplitude.
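In the same informal spirit, the number that a decidable predicate p "generates" can be computed bit by bit as below; the bound b and all complexity bookkeeping are omitted from this sketch:

```python
def number_generated_by(p, length_bound):
    """Build the number x whose y'th least significant bit is 1 iff p(y) holds,
    for bit positions y below length_bound."""
    x = 0
    for y in range(length_bound):
        if p(y):
            x |= 1 << y              # set the y'th least significant bit of x
    return x

# Example: the predicate "y is even", looking at the first 4 bit positions
print(bin(number_generated_by(lambda y: y % 2 == 0, 4)))   # 0b101
```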

Now we are ready to state the main result, after a couple of additional terminological conventions. By an arithmetical problem we mean a problem/game that is expressed by some formula of the language of CLA11R. We say that such a problem is representable in CLA11R if it is expressed by some theorem of CLA11R. Note that the same arithmetical problem can be expressed by many different formulas and, in order for the problem to qualify as representable, it is sufficient that just one of these formulas be provable.

Theorem 1. For any tricomplexity R, the following holds:

• Constructive soundness: For every theorem F of CLA11R + TA, the problem expressed by F has an R tricomplexity solution, and such a solution can be automatically extracted from a proof of F.

• Extensional completeness: Every arithmetical problem with an R tricomplexity solution is representable in CLA11R.

• Intensional completeness: Every formula expressing a problem with an R tricomplexity solution is provable in CLA11R + TA.

The R parameter of CLA11R can be tuned in a mechanical, brute force, "canonical" way to obtain a sound and complete theory with respect to the target computational tricomplexity. For instance, for

linear amplitude + logarithmic space + polynomial time,

we can choose the R with

Ramplitude = {all terms built from x using 0, ', +}; you can see that these are exactly the terms that describe all linear functions.

Rspace = {all pterms built from |x| using 0, ', +}; this can be seen as the set of pterms that express all logarithmic functions.

Rtime = {all terms built from x using 0, ', +, ×}; again, you can see that these are exactly those terms that express polynomial functions.
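Just for concreteness, such bounds are ordinary terms that can be represented and evaluated mechanically. Below is a toy evaluator with a tuple-based representation of terms that I am inventing for the occasion (for Rspace one would build terms from |x| rather than x):

```python
def eval_bound(term, x):
    """Evaluate a bound term built from x, 0, ' (successor), + and ×."""
    if term == "x":
        return x
    if term == "0":
        return 0
    op, *args = term
    if op == "succ":
        return eval_bound(args[0], x) + 1
    if op == "+":
        return eval_bound(args[0], x) + eval_bound(args[1], x)
    if op == "*":
        return eval_bound(args[0], x) * eval_bound(args[1], x)
    raise ValueError("unknown operator: " + repr(op))

linear = ("succ", ("+", "x", "x"))      # x + x + 1: built from 0, ', + only (fits Ramplitude above)
poly   = ("+", ("*", "x", "x"), "x")    # x*x + x:   needs ×                 (fits Rtime above)
print(eval_bound(linear, 10), eval_bound(poly, 10))   # 21 110
```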

In a similar fashion, with very little effort, you can generate instances of CLA11 for a huge variety of tricomplexities. All reasonable complexity triples can be captured this way, such as:

• polynomial amplitude + polynomial space + polynomial time

• linear amplitude + linear space + quasipolynomial time

• linear amplitude + polynomial space + exponential time

• superexponential amplitude + elementary space + primitive recursive time

... you just name it!

3. Potential applications of clarithmetics

The two main motivations for studying CLA11 and clarithmetics in general are that (1) such theories can be seen as declarative programming languages in an extreme sense, and (2) they can potentially be used (perhaps, hopefully) as tools for separating computational complexity classes.

(1) CLA11 as a declarative programming language

Let us say you want a program that computes the integer square root function and runs in polynomial time, logarithmic space and linear amplitude. All you need to do to get such a program is this:

1. Instantiate the R parameter with the corresponding tricomplexity (cf. the end of Section 2.);

2. Write the formula ⊓x(∃y(x = y × y) → ⊔y(x = y × y)) expressing your goal in the language of CLA11R;

3. Find a CLA11R-proof of this formula.

Once you find a proof, the compiler then extracts the sought program from it. With ⊓x(∃y(x = y × y) → ⊔y(x = y × y)) being in fact a specification of such a program, the proof automatically also serves as a formal verification of the fact that the program meets its specification. This way, the notorious and generally undecidable problem of program verification is fully neutralized.

Other than that, the proof lines can be seen as (the best possible) comments for the corresponding steps of the program.

Furthermore (this is from the realm of fiction at this point, but still), if we develop reasonably efficient theorem-provers, step 3 (finding the proof) can be delegated to the compiler, and the "programmer's" job will be just to write the goal/specification ⊓x(∃y(x = y × y) → ⊔y(x = y × y)). All of us can thus qualify as programmers, because we all can very easily understand the language in which such a formula is written.
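To give a feel for the end product, here is a hand-written Python program whose input/output behavior matches the specification ⊓x(∃y(x = y × y) → ⊔y(x = y × y)). It is only an illustration of what the specification asks for; it is not the program that CLA11 would actually extract, and no claim is made here about its space or amplitude usage.

```python
def square_root_if_perfect(x):
    """Return y with x = y * y whenever such y exists (the ⊔y move).

    If x is not a perfect square, the antecedent ∃y(x = y × y) is false and the
    machine wins no matter what it answers, so any output is acceptable then.
    """
    lo, hi = 0, x
    while lo < hi:                    # binary search, polynomial in the size of x
        mid = (lo + hi) // 2
        if mid * mid < x:
            lo = mid + 1
        else:
            hi = mid
    return lo                         # the exact square root when x is a perfect square

print(square_root_if_perfect(49))     # prints 7
```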

(2) CLA11 as a tool for separating computational complexity classes.

The main open problems in the theory of computation are about separating various naturally emerging computational complexity classes, with the P versus NP problem being the best known example of problems of this kind. To show that two (tri)complexity classes R1 and R2 are not equal, it is sufficient to separate the two theories CLA11R1 and CLA11R2. Separating theories should be easier than separating complexities directly. After all, we have seen some very impressive success stories of separating theories, such as separating different versions of set theory or proving independence results. We do not have comparable success stories in separating complexities, which has mostly been a fruitless struggle so far.

An extensive online survey of CoL and CoL-based applied theories can be found at

www.csc.villanova.edu/~japaridz/CL/.

Most relevant publications: [Japaridze 2010], [Japaridze 2011], [Japaridze 2014], [Japaridze 2015], [Japaridze 2016a], [Japaridze 2016b], [Japaridze 2016c].

Acknowledgements. This paper is based on a talk given at the RAS Institute of Philosophy seminar on 14 November 2018 (see the video at https://youtu.be/w9-1Xm8MMmQ).

References

Japaridze 2010 - Japaridze, G. "Towards applied theories based on computability logic", Journal of Symbolic Logic, 2010, Vol. 75, pp. 565-601.

Japaridze 2011 - Japaridze, G. "Introduction to clarithmetic I", Information and Computation, 2011, Vol. 209, pp. 1312-1354.

Japaridze 2014 - Japaridze, G. "Introduction to clarithmetic III", Annals of Pure and Applied Logic, 2014, Vol. 165, pp. 241-252.

Japaridze 2015 - Japaridze, G. "On the system CL12 of computability logic", Logical Methods in Computer Science, 2015, Vol. 11, No. 3, Paper 1, pp. 1-71.

Japaridze 2016a - Japaridze, G. "Build your own clarithmetic I: Setup and completeness", Logical Methods in Computer Science, 2016, Vol. 12, No. 3, Paper 8, pp. 1-59.

Japaridze 2016b - Japaridze, G. "Build your own clarithmetic II: Soundness", Logical Methods in Computer Science, 2016, Vol. 12, No. 3, Paper 12, pp. 1-62.

Japaridze 2016c - Japaridze, G. "Introduction to clarithmetic II", Information and Computation, 2016, Vol. 247, pp. 290-312.
