REVIEWS
H. B. CURRY. A note on the associative law in logical algebras. Bulletin of the American Mathematical Society, vol. 42 (1936), pp. 523-524.

A generalization of Bernays' proof of the redundancy of the associative law in Part I Section A of Principia mathematica, showing that any system, containing a binary operation (denoted by juxtaposition) and a relation, <, permitting inference, such that (1) p < pp, (2) pq < p, (3) pq < qp, (4) if p < q and q < r, then p < r, and either (5) if p < q, then rp < rq, or (6) if p < q and p < r, then p < qr, must also contain all forms of the associative law.

PAUL HENLE
V. C. ALDRICH. Renegade instances. Philosophy of science, vol. 3 (1936), pp. 506-514.

If the sentence, "Swans are birds," stand for an empirical generalization, and is therefore synthetic, it may be invalidated by negative instances. If, however, it stand for an analytic proposition (the intension of swan including the property of being a bird), then, since negative instances are impossible, the proposition cannot be invalidated by one. Nonetheless we may find instances having the defining properties of swan except part of the intension of bird (e.g., biped). A sufficient number of such, or a single impressive one, may render our definition of swan inconvenient, thereby resulting in our redefining the word. Such instances Mr. Aldrich calls "renegade." Whether such an instance does "renegade" the a priori or definitional generalization depends wholly upon pragmatic considerations. The author presents no discussion of these considerations, of their bearing on scientific classification, or of the origin of such a priori generalizations in the empirical sciences.

EVERETT J. NELSON
RUDOLF CARNAP and FRIEDRICH BACHMANN. Über Extremalaxiome. Erkenntnis, vol. 6 (1936), pp. 166-188.

The authors discuss axiom systems of the following sort. Superposed upon a finite sequence of axioms (the "Rumpfsystem"), each of which makes a certain assertion with regard to the fundamental concepts employed, appears a final axiom seemingly concerning the preceding axioms and not related to the fundamental concepts of the system. The best known such axiom-system is that of Hilbert for Euclidean geometry, with its famous "Axiom of Completeness." Whether the final axiom states that no more inclusive system of things exists which satisfies the preceding (and is therefore a "maximal" axiom) or is analogously a "minimal" axiom, such a final axiom will be called an "extremal" axiom. The authors defend the use of such axioms under suitable restrictions and when properly stated and interpreted.

A fundamental concept in the study of axiomatics is the notion of isomorphism, which the authors extend by the concept of correlators, which are binary relations between given n-ary relations. Complete isomorphism is discussed with respect to types of like specified order. If any two structures satisfying the "Rumpfsystem" are completely isomorphic as to elements of specified order, one may then inquire whether such a structure does or does not have a proper substructure isomorphic with it. Distinction is made between extension of model and extension of structure. The legitimate introduction of the extremal axiom corresponds to the selection of extremal structures. The question of independence of the axioms in the "Rumpfsystem" as affected by the introduction of an extremal postulate is discussed, and various cases are found to occur. A final serious question arises with regard to extension to a system of different order-type, as occurs from the system of rational numbers to that of real numbers regarded as sequences of rationals. Tarski's restriction to an increase of one unit in order type has many attractive features, and avoids certain serious difficulties, but is found to be somewhat too restrictive.

ALBERT A. BENNETT
A. M. TURING. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2 s. vol. 42 (1936-7), pp. 230-265.

The author proposes as a criterion that an infinite sequence of digits 0 and 1 be "computable" that it shall be possible to devise a computing machine, occupying a finite space and with working parts of finite size, which will write down the sequence to any desired number of terms if allowed to run for a sufficiently long time. As a matter of convenience, certain further restrictions are imposed on the character of the machine, but these are of such a nature as obviously to cause no loss of generality; in particular, a human calculator, provided with pencil and paper and explicit instructions, can be regarded as a kind of Turing machine. It is thus immediately clear that computability, so defined, can be identified with (especially, is no less general than) the notion of effectiveness as it appears in certain mathematical problems (various forms of the Entscheidungsproblem, various problems to find complete sets of invariants in topology, group theory, etc., and in general any problem which concerns the discovery of an algorithm).

The principal result is that there exist sequences (well-defined on classical grounds) which are not computable. In particular the deducibility problem of the functional calculus of first order (Hilbert and Ackermann's engere Funktionenkalkül) is unsolvable in the sense that, if the formulas of this calculus are enumerated in a straightforward manner, the sequence whose nth term is 0 or 1, according as the nth formula in the enumeration is or is not deducible, is not computable. (The proof here requires some correction in matters of detail.)

In an appendix the author sketches a proof of equivalence of "computability" in his sense and "effective calculability" in the sense of the present reviewer (American journal of mathematics, vol. 58 (1936), pp. 345-363; see review in this JOURNAL, vol. 1, pp. 73-74). The author's result concerning the existence of uncomputable sequences was also anticipated, in terms of effective calculability, in the cited paper. His work was, however, done independently, being nearly complete and known in substance to a number of persons at the time that the paper appeared.

As a matter of fact, there is involved here the equivalence of three different notions: computability by a Turing machine, general recursiveness in the sense of Herbrand-Gödel-Kleene, and λ-definability in the sense of Kleene and the present reviewer. Of these, the first has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately, i.e. without the necessity of proving preliminary theorems. The second and third have the advantage of suitability for embodiment in a system of symbolic logic.
ALONZO CHURCH
EMIL L. POST. Finite combinatory processes—formulation 1. The journal of symbolic logic, vol. 1 (1936), pp. 103-105.

The author proposes a definition of "finite 1-process" which is similar in formulation, and in fact equivalent, to computation by a Turing machine (see the preceding review). He does not, however, regard his formulation as certainly to be identified with effectiveness in the ordinary sense, but takes this identification as a "working hypothesis" in need of continual verification. To this the reviewer would object that effectiveness in the ordinary sense has not been given an exact definition, and hence the working hypothesis in question has not an exact meaning. To define effectiveness as computability by an arbitrary machine, subject to restrictions of finiteness, would seem to be an adequate representation of the ordinary notion, and if this is done the need for a working hypothesis disappears.

The present paper was written independently of Turing's, which was at the time in press but had not yet appeared.
ALONZO CHURCH
H. B. SMITH. The algebra of propositions. Philosophy of science, vol. 3 (1936), pp. 551-578.

The author is proposing a calculus of propositions based on four primitive ideas: disjunction p+q, conjunction pq, negation p′, and implication p ∠ q. Although not explicitly stated, it is apparently intended that the first three operations shall obey all the usual laws. The implication p ∠ q is not, however, to be identified with p′+q, and is thus in some degree analogous to C. I. Lewis's p ⥽ q. A modal operation |p|, analogous to Lewis's ◊p, is defined as (p ∠ 0)′, where 0 is the null-proposition (a proposition q such that q ∠ q′). Equivalence is expressed by p = q, apparently to be defined as (p ∠ q)(q ∠ p).

On intuitive grounds not entirely clear, the author requires that "all modal distinctions" shall be recognized. That is, let two expressions be formed from the letter p, each by a finite number of applications of negation and the modal operation, negation being nowhere applied twice in succession (i.e. without one or more intervening applications of the modal operation); then these two expressions shall not be assumed equivalent unless they are identical expressions.

An immediate difficulty is that if we assume (1) p ∠ ||p|′|′ and (2) (p ∠ q)(q ∠ r) ∠ (p ∠ r) and the principle of inference (3), "If P and PQ ∠ R then Q ∠ R," then it is possible to infer |p|′ = |||p|′|′|′. This the author meets by rejecting (2). (In connection with an earlier note on this same point, the

## Discussion

Here you find Alonzo Church's review of Alan Turing's groundbreaking paper "On computable numbers, with an application to the Entscheidungsproblem". Turing's paper, published in the Proceedings of the London Mathematical Society in 1936-37, introduced the concept of Turing machines, a theoretical model of computation that has had a profound impact on our understanding of computability. Alonzo Church, a renowned mathematician and logician, was a contemporary of Turing and made his own significant contributions to the fields of logic, recursion theory, and computer science. Around the time of Turing's publication, Church was independently working on his own approach to the concept of computability, which led to the development of lambda calculus. Lambda calculus and Turing machines, though different in their approach, were later proven to be equivalent in their computational capabilities, forming the basis of the Church-Turing thesis. Church was actually the first to coin the term "Turing machine" for Turing's universal machine.

The Entscheidungsproblem, or "decision problem," posed by German mathematician David Hilbert in the early 20th century, sought to determine whether one could write a general algorithm that could decide the truth or falsity of any mathematical statement within a given logical system. This challenge aimed to explore the limits of mathematical reasoning and the boundaries of what could be computed or decided through systematic methods. By proposing a criterion for computability, Turing laid the foundation for understanding the limits of computation: if an infinite sequence of 0s and 1s can be generated by a Turing machine, then the sequence is considered computable.

An effective method in mathematics is a systematic, step-by-step procedure that can be used to solve a problem or make a decision. It must be well-defined, unambiguous, and finite, meaning that it should always produce a result after a finite number of steps. An effective method should be general enough to handle any input within the scope of the problem it addresses and should not rely on intuition or guesswork. Examples of effective methods include long division, the Euclidean algorithm for finding the greatest common divisor, and sorting algorithms in computer science.

The deducibility problem, in the context of first-order logic, refers to the question of whether a given formula can be deduced from a set of axioms and inference rules. In other words, it asks whether there is a valid proof, using the given axioms and rules, that demonstrates the truth of the formula. Church is explaining that Turing's result shows this deducibility problem is unsolvable, meaning there is no general algorithm that can always decide whether a formula is deducible or not. Church imagines a long list of all the formulas of first-order logic, arranged in a specific order. Now imagine creating a new sequence in which each term is either 0 or 1, with 0 representing that the corresponding formula in the list is not deducible and 1 representing that it is deducible. Turing's result states that this sequence of 0s and 1s is not computable.

Turing added an appendix to his paper after he learned about Alonzo Church's work. The cited paper was ["An unsolvable problem of elementary number theory"](https://www3.risc.jku.at/people/schreine/courses/compcomp/Church1936.pdf) by Alonzo Church. In the appendix Turing effectively outlines that the two approaches are equivalent. Here is the note Turing added to the introduction of his paper:

> In a recent paper Alonzo Church has introduced an idea of "effective calculability", which is equivalent to my "computability", but is very differently defined. Church also reaches similar conclusions about the Entscheidungsproblem. The proof of equivalence between "computability" and "effective calculability" is outlined in an appendix to the present paper.

Here Church is alluding to the equivalence of three different notions of computing. In other words, a function is considered computable if it can be computed by any one of these methods, and these methods are, in a sense, interchangeable:

1. Computability by a Turing machine: A Turing machine is a simple, abstract model of computation that simulates the logic of any computer algorithm. If a function can be computed by a Turing machine, it is considered computable.
2. General recursiveness in the sense of Herbrand-Gödel-Kleene: Recursive functions are a class of functions defined from a finite set of base cases and a set of recursive rules. The concept of general recursiveness was developed by Herbrand, Gödel, and Kleene.
3. λ-definability in the sense of Kleene and Church: This concept is based on Church's λ-calculus, a formal system for expressing computation using function abstraction and application, along with variable binding and substitution.

Turing would go on to become a graduate student at Princeton University, where he studied under Alonzo Church from 1936 to 1938. During this time, Turing delved deeper into the connections between lambda calculus and Turing machines, ultimately completing his Ph.D. under Church's guidance. In 1938, upon completing his Ph.D., Turing returned to the United Kingdom, even though John von Neumann, a fellow mathematician and a pioneer in the field of computer science, had invited him to stay and work at the Institute for Advanced Study in Princeton. Turing's decision to return to the UK set the stage for his crucial contributions to the British codebreaking efforts during World War II, most notably his work on breaking the Enigma cipher.
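The discussion above notes that a sequence of 0s and 1s counts as computable when a Turing machine can be devised to print it. As a concrete illustration, here is a minimal simulator sketch in Python; the `run_turing_machine` function and the transition table are my own, modeled on the alternating-digit machine Turing uses as his first example in the paper (which prints 0 1 0 1 … on every other tape square).

```python
# A minimal one-tape Turing-machine sketch (hypothetical helper, not from
# the paper). `table` maps (state, scanned_symbol) to
# (symbol_to_write, head_move, next_state), with head_move +1 or -1.

def run_turing_machine(table, start_state, steps):
    """Run the machine for a fixed number of steps and return the digits printed."""
    tape = {}          # sparse tape: position -> symbol (blank squares absent/None)
    head = 0
    state = start_state
    for _ in range(steps):
        scanned = tape.get(head)                      # None means a blank square
        write, move, state = table[(state, scanned)]  # look up the transition
        tape[head] = write                            # write (possibly a blank)
        head += move                                  # move the head
    # Read off the printed digits in tape order, skipping blank squares.
    return [tape[i] for i in sorted(tape) if tape[i] in ("0", "1")]

# Four-state machine printing 0 1 0 1 ..., leaving a blank square between
# digits, as in Turing's convention of alternating figure and scratch squares.
table = {
    ("b", None): ("0", +1, "c"),
    ("c", None): (None, +1, "e"),
    ("e", None): ("1", +1, "f"),
    ("f", None): (None, +1, "b"),
}

print(run_turing_machine(table, "b", 8))  # -> ['0', '1', '0', '1']
```

Run longer and the machine writes the sequence "to any desired number of terms", which is exactly the criterion of computability quoted in the review.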
### engere Funktionenkalkül In 1928, David Hilbert and Wilhelm Ackermann published a book titled Grundzüge der theoretischen Logik (which translates to Principles of Mathematical Logic). This book is considered the first elementary text clearly grounded in the formalism now known as first-order logic (called the “engere Funktionenkalkül,” the narrower function calculus). The engere Funktionenkalkül provided a formal framework for expressing mathematical statements and theorems using symbols, logical connectives, quantifiers, and a set of axioms and inference rules.
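Among the effective methods listed in the discussion above, the Euclidean algorithm is the easiest to spell out. The sketch below (function name my own) shows why it fits Church's description: each step is mechanical and unambiguous, and it always halts after finitely many steps because the remainder strictly decreases.

```python
# The Euclidean algorithm as an "effective method": a finite, well-defined,
# step-by-step procedure requiring no intuition or guesswork.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two non-negative integers."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) by (b, a mod b); b strictly decreases
    return a

print(gcd(1071, 462))  # -> 21
```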