*This posthumous essay begins an occasional feature in which will appear documents, usually translations, otherwise not readily available.*

## Intelligent Machinery, A Heretical Theory*

**A. M. TURING**

'You cannot make a machine to think for you.' This is a commonplace that
is usually accepted without question. It will be the purpose of this paper
to question it.

Most machinery developed for commercial purposes is intended to carry out some very specific job, and to carry it out with certainty and considerable speed. Very often it does the same series of operations over and over again without any variety. This fact about the actual machinery available is a powerful argument to many in favour of the slogan quoted above. To a mathematical logician this argument is not available, for it has been shown that there are machines theoretically possible which will do something very close to thinking. They will, for instance, test the validity of a formal proof in the system of Principia Mathematica, or even tell of a formula of that system whether it is provable or disprovable. In the case that the formula is neither provable nor disprovable such a machine certainly does not behave in a very satisfactory manner, for it continues to work indefinitely without producing any result at all, but this cannot be regarded as very different from the reaction of the mathematicians, who have for instance worked for hundreds of years on the question as to whether Fermat's last theorem is true or not. For the case of machines of this kind a more subtle kind of argument is necessary. By Gödel's famous theorem, or some similar argument, one can show that however the machine is constructed there are bound to be cases where the machine fails to give an answer, but a mathematician would be able to. On the other hand, the machine has certain advantages over the mathematician. Whatever it does can be relied upon, assuming no mechanical 'breakdown', whereas the mathematician makes a certain proportion of mistakes. I believe that this danger of the mathematician making mistakes is an unavoidable corollary of his power of sometimes hitting upon an entirely new method. This seems to be confirmed by the well known fact that the most reliable people will not usually hit upon really new methods.

* © P. N. Furbank, for the Turing estate. Reprinted with permission.
PHILOSOPHIA MATHEMATICA (3) Vol. 4 (1996), pp. 256-260.
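
Turing's point about the machine 'continuing to work indefinitely' is the semidecidability of provability: search long enough and you will confirm any provable formula, but for a formula that is neither provable nor disprovable the search never ends. Here is a minimal sketch of that behaviour in Python, using a deliberately tiny toy system rather than Principia Mathematica (the axiom, rule, and function names are mine, purely illustrative):

```python
AXIOM = "I"   # toy formal system (NOT Principia Mathematica): one axiom "I",
              # one inference rule that appends a "0"; theorems: I, I0, I00, ...

def theorems():
    """Enumerate every derivable string, shortest derivation first."""
    t = AXIOM
    while True:
        yield t
        t = t + "0"   # apply the single inference rule

def search_for_proof(formula, max_steps=None):
    """Semidecision procedure: halts with True if `formula` is a theorem.

    If the formula is not derivable and no bound is given, the loop runs
    forever -- the 'unsatisfactory' behaviour Turing describes for a
    formula that is neither provable nor disprovable.
    """
    for step, t in enumerate(theorems()):
        if t == formula:
            return True
        if max_steps is not None and step >= max_steps:
            return None   # gave up; no verdict either way

print(search_for_proof("I000"))              # True, found after a few steps
print(search_for_proof("0I", max_steps=50))  # None: search abandoned, undecided
```
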

My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. They will make mistakes at times, and at times they may make new and very interesting statements, and on the whole the output of them will be worth attention to the same sort of extent as the output of a human mind. The content of this statement lies in the greater frequency expected for the true statements, and it cannot, I think, be given an exact statement. It would not, for instance, be sufficient to say simply that the machine will make any true statement sooner or later, for an example of such a machine would be one which makes all possible statements sooner or later. We know how to construct these, and as they would (probably) produce true and false statements about equally frequently, their verdicts would be quite worthless. It would be the actual reaction of the machine to circumstances that would prove my contention, if indeed it can be proved at all.
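
The 'machine which makes all possible statements sooner or later' that Turing dismisses is easy to make literal: enumerate every finite string over some alphabet. A throwaway sketch (the alphabet is arbitrary, my choice) of why such a machine's verdicts are worthless:

```python
from itertools import count, product

def all_statements(alphabet="ab "):
    """Yield every finite string over `alphabet`, shortest first.

    Sooner or later this emits every possible statement -- true, false,
    and meaningless alike -- which is exactly why its output carries no
    weight as a verdict on anything.
    """
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_statements()
print([next(gen) for _ in range(10)])  # the first few 'statements' it makes
```
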

Let us go rather more carefully into the nature of this 'proof'. It is clearly possible to produce a machine which would give a very good account of itself for any range of tests, if the machine were made sufficiently elaborate. However, this again would hardly be considered an adequate proof. Such a machine would give itself away by making the same sort of mistake over and over again, and being quite unable to correct itself, or to be corrected by argument from outside. If the machine were able in some way to 'learn by experience' it would be much more impressive. If this were the case there seems to be no real reason why one should not start from a comparatively simple machine, and, by subjecting it to a suitable range of 'experience' transform it into one which was much more elaborate, and was able to deal with a far greater range of contingencies. This process could probably be hastened by a suitable selection of the experiences to which it was subjected. This might be called 'education'. But here we have to be careful. It would be quite easy to arrange the experiences in such a way that they automatically caused the structure of the machine to build up into a previously intended form, and this would obviously be a gross form of cheating, almost on a par with having a man inside the machine. Here again the criterion as to what would be considered reasonable in the way of 'education' cannot be put into mathematical terms, but I suggest that the following would be adequate in practice. Let us suppose that it is intended that the machine shall understand English, and that owing to its having no hands or feet, and not needing to eat, not desiring to smoke, it will occupy its time mostly in playing games such as Chess and GO, and possibly Bridge. The machine is provided with a typewriter keyboard on which any remarks to it are typed, and it also types out any remarks that it wishes to make. I suggest that the education of the machine should be entrusted to some highly competent schoolmaster who is interested in the project but who is forbidden any detailed knowledge of the inner workings
of the machine. The mechanic who has constructed the machine, however,
is permitted to keep the machine in running order, and if he suspects that
the machine has been operating incorrectly may put it back to one of its
previous positions and ask the schoolmaster to repeat his lessons from that
point on, but he may not take any part in the teaching. Since this procedure would only serve to test the bona fides of the mechanic, I need hardly
say that it would not be adopted in the experimental stages. As I see it,
this education process would in practice be an essential to the production
of a reasonably intelligent machine within a reasonably short space of time.
The human analogy alone suggests this.

I may now give some indication of the way in which such a machine
might be expected to function. The machine would incorporate a memory.
This does not need very much explanation. It would simply be a list of all
the statements that had been made to it or by it, and all the moves it had
made and the cards it had played in its games. These would be listed in
chronological order. Besides this straightforward memory there would be a
number of 'indexes of experiences'. To explain this idea I will suggest the
form which one such index might possibly take. It might be an alphabetical
index of the words that had been used giving the 'times' at which they had
been used, so that they could be looked up in the memory. Another such
index might contain patterns of men or parts of a GO board that had
occurred. At comparatively late stages of education the memory might be
extended to include important parts of the configuration of the machine
at each moment, or in other words it would begin to remember what its
thoughts had been. This would give rise to fruitful new forms of indexing.
New forms of index might be introduced on account of special features
observed in the indexes already used. The indexes would be used in this sort of way.
Whenever a choice has to be made as to what to do next features of
the present situation are looked up in the indexes available, and the previous
choice in the similar situations, and the outcome, good or bad, is discovered.
The new choice is made accordingly. This raises a number of problems. If
some of the indications are favourable and some are unfavourable what
is one to do? The answer to this will probably differ from machine to
machine and will also vary with its degree of education. At first probably
some quite crude rule will suffice, e.g., to do whichever has the greatest
number of votes in its favour. At a very late stage of education the whole
question of procedure in such cases will probably have been investigated by
the machine itself, by means of some kind of index, and this may result in
some highly sophisticated, and, one hopes, highly satisfactory, form of rule.
It seems probable however that the comparatively crude forms of rule will
themselves be reasonably satisfactory, so that progress can on the whole
be made in spite of the crudeness of the choice rules. This seems to be
verified by the fact that Engineering problems are sometimes solved by the
crudest rule of thumb procedure which only deals with the most superficial
aspects of the problem, e.g., whether a function increases or decreases with
one of its variables. Another problem raised by this picture of the way
behaviour is determined is the idea of 'favourable outcome'. Without some
such idea, corresponding to the 'pleasure principle' of the psychologists, it
is very difficult to see how to proceed. Certainly it would be most natural
to introduce some such thing into the machine. I suggest that there should
be two keys which can be manipulated by the schoolmaster, and which
represent the ideas of pleasure and pain. At later stages in education the
machine would recognise certain other conditions as desirable owing to their
having been constantly associated in the past with pleasure, and likewise
certain others as undesirable. Certain expressions of anger on the part
of the schoolmaster might, for instance, be recognised as so ominous that
they could never be overlooked, so that the schoolmaster would find that
it became unnecessary to 'apply the cane' any more.
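
The chronological memory, the 'indexes of experiences', the vote-counting choice rule, and the two pleasure/pain keys together describe a recognisable learning loop. Below is a minimal sketch of that loop as I read it (class and method names are mine, not Turing's): a log of experiences, a word index into the log, and a crude rule that prefers whichever action collected the most 'pleasure' in similar past situations.

```python
from collections import defaultdict

class PupilMachine:
    """Toy sketch of the memory, the indexes, and the crude choice rule."""

    def __init__(self):
        self.memory = []                     # chronological log of experiences
        self.word_index = defaultdict(list)  # word -> 'times' at which it occurred

    def record(self, situation, action, outcome):
        """Store one experience.  `outcome` stands in for the schoolmaster's
        two keys: +1 for pleasure, -1 for pain."""
        time = len(self.memory)
        self.memory.append((situation, action, outcome))
        for word in situation.split():
            self.word_index[word].append(time)

    def choose(self, situation, candidate_actions):
        """Look up similar past situations in the index and 'do whichever
        has the greatest number of votes in its favour'."""
        votes = defaultdict(int)
        for word in situation.split():
            for t in self.word_index[word]:
                _, past_action, outcome = self.memory[t]
                votes[past_action] += outcome
        return max(candidate_actions, key=lambda a: votes[a])

# A tiny bit of 'education': the machine comes to prefer the praised move.
pupil = PupilMachine()
pupil.record("opponent attacks queen", "retreat queen", +1)
pupil.record("opponent attacks queen", "ignore threat", -1)
print(pupil.choose("opponent attacks queen", ["retreat queen", "ignore threat"]))
```

The +1/-1 outcome is only a stand-in for the two keys; a richer machine would, as Turing suggests, eventually build new indexes and better choice rules on its own.
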

To make further suggestions along these lines would perhaps be unfruitful at this stage, as they are likely to consist of nothing more than an
analysis of actual methods of education applied to human children. There
is, however, one feature that I would like to suggest should be incorporated
in the machines, and that is a 'random element'. Each machine should be
supplied with a tape bearing a random series of figures, e.g., 0 and 1 in equal
quantities, and this series of figures should be used in the choices made by
the machine. This would result in the behaviour of the machine not being
by any means completely determined by the experiences to which it was
subjected, and would have some valuable uses when one was experiment-
ing with it. By faking the choices made one would be able to control the
development of the machine to some extent. One might, for instance, insist
on the choice made being a particular one at, say, 10 particular places, and
this would mean that about one machine in 1024 or more would develop to
as high a degree as the one which had been faked. This cannot very well
be given an accurate statement because of the subjective nature of the idea
of 'degree of development' to say nothing of the fact that the machine that
had been faked might have been also fortunate in its unfaked choices.
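
The 'one machine in 1024' figure is just 2^10 = 1024: forcing the outcome at 10 binary choice points picks out one development path among 1024 equally likely ones. A small sketch of the random tape and of 'faking' some of its figures (function names are mine, purely illustrative):

```python
import random

def random_tape(length, seed=0):
    """A tape bearing a random series of figures, 0 and 1 in equal quantities."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(length)]

def run_with_faked_choices(tape, faked=None):
    """Read the tape, but override ('fake') the figure at the given positions."""
    faked = faked or {}
    return [faked.get(i, bit) for i, bit in enumerate(tape)]

tape = random_tape(12)
faked = {i: 1 for i in range(10)}   # insist on a particular choice at 10 places
print(run_with_faked_choices(tape, faked))
print("chance of these 10 choices arising by luck alone: 1 in", 2 ** 10)
```
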
Let us now assume, for the sake of argument, that these machines are a
genuine possibility, and look at the consequences of constructing them. To
do so would of course meet with great opposition, unless we have advanced
greatly in religious toleration from the days of Galileo. There would be
great opposition from the intellectuals who were afraid of being put out of
a job. It is probable though that the intellectuals would be mistaken about this.
There would be plenty to do in trying, say, to keep one's intelligence
up to the standard set by the machines, for it seems probable that once the
machine thinking method had started, it would not take long to outstrip
our feeble powers. There would be no question of the machines dying, and
they would be able to converse with each other to sharpen their wits. At
some stage therefore we should have to expect the machines to take control,
in the way that is mentioned in Samuel Butler's Erewhon.

ABSTRACT. In this posthumous essay, Turing contends that it may be possible to
construct a machine in which there would be an element of randomness and an
analogue of the pleasure principle of psychology, that could be taught, and that
could eventually be more intelligent than humans.

## Discussion

There are great insights in this final paragraph:

- machine intelligence will develop quickly
- machines will have a survival instinct
- they will be interconnected (internet) and learn from each other
- machines will take control

It is interesting to note that these observations made by Turing date back to 1951.

> "intended to carry out any definite rule of thumb process which could have been done by a human operator working in a disciplined but unintelligent manner"

A. Turing on the new electronic machines

> Turing believes that the condition that a machine cannot make mistakes is not a requirement for intelligence.

Humans are intelligent beings and make a lot of mistakes. Just as a mathematician can have new ideas and develop new methods while making mistakes, so can machines. It took seven years, 150 pages, and a lot of mistakes and iterations until Andrew Wiles finally proved [Fermat's Last Theorem](https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem). True, creativity is a spontaneous process: it is hard for reliable people to come up with new, creative methods.

> "I suggest that there should be two keys which can be manipulated by the schoolmaster, and which represent the ideas of pleasure and pain."

This hints at a moralistic evaluation of the next state (positive and negative reinforcement).

> ## TL;DR: In this presentation given in 1951 A. Turing argues that it is possible to build general-purpose intelligent machines. He believes that intelligent machines will be interconnected and will learn from each other. He anticipates that machine intelligence will develop quickly and that machines will take over.

In 1948, with only paper and pencil, Turing was able to show that a large neural network can be configured so that it becomes a **general-purpose computer.** This hypothesis was well ahead of its time, and today it remains among the best guesses concerning one of cognitive science's hardest problems. You can learn about it here: [Alan Turing - Intelligent Machinery (1948)](http://www.alanturing.net/turing_archive/archive/l/l32/L32-001.html)

> Thanks to our memory, humans still learn faster than intelligent machines.

Intelligent machines still lag behind humans in the speed at which they learn. When it comes to mastering video games, **the best deep-learning machines take 200 hours of play to reach the same skill levels that humans achieve in just two hours.** Recent progress has been made by Google's DeepMind; read more here:

- [DeepMind - Enabling Continual Learning in Neural Networks](https://deepmind.com/blog/enabling-continual-learning-in-neural-networks/)
- [How DeepMind's Memory Trick Helps AI Learn Faster](https://www.technologyreview.com/s/603868/how-deepminds-memory-trick-helps-ai-learn-faster/)

Chess and Go (the latter considered the most complex board game ever invented) have already been mastered by AI. **No human can beat machines in a classical time control chess game** (find a list of the best machines below). AlphaGo, a narrow AI program developed by DeepMind, first defeated a professional human player and then went on to beat one of the world's top Go players.
You can learn more about the advances here:

- [Complete rating list of chess machines](http://www.computerchess.org.uk/ccrl/4040/rating_list_all.html)
- [Deep Blue, chess computer](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer))
- [Google reveals secret test of AI bot to beat top Go players](http://www.nature.com/news/google-reveals-secret-test-of-ai-bot-to-beat-top-go-players-1.21253)
- [Google's AI Wins Fifth And Final Game Against Go Genius Lee Sedol](https://www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/)
- [AlphaGo on Wikipedia](https://en.wikipedia.org/wiki/AlphaGo)

This is a great resource to learn more about Gödel's Incompleteness Theorems: [Stanford Encyclopedia of Philosophy - Gödel's Incompleteness Theorems](https://plato.stanford.edu/entries/goedel-incompleteness/)

**Alan Mathison Turing** conceived the modern computer in 1935; today, all computers are, in essence, *Turing machines*. During his short (he died at the age of 41) but remarkable career, Turing had no great interest in publishing his research. Only a few people are familiar with Turing's fascinating anticipation of connectionism, or neuron-like computing. Turing pioneered the field of artificial intelligence (AI).

> Turing pioneered the field of artificial intelligence

Alan Turing gave this presentation, 'Intelligent Machinery, A Heretical Theory', on a BBC radio program called The '51 Society in 1951. The presentation was followed by a panel discussion with the mathematicians Max Newman and Peter Hilton and the philosopher Michael Polanyi. In it, Turing targets the claim that *you cannot make a machine think for you*, a claim with which he disagrees.

![A. Turing](https://upload.wikimedia.org/wikipedia/commons/a/a1/Alan_Turing_Aged_16.jpg)

> "Whenever a choice has to be made as to what to do next features of the present situation are looked up in the indexes available, and the previous choice in the similar situations, and the outcome, good or bad, is discovered. The new choice is made accordingly. This raises a number of problems. If some of the indications are favourable and some are unfavourable what is one to do? The answer to this will probably differ from machine to machine and will also vary with its degree of education. At first probably some quite crude rule will suffice, e.g., to do whichever has the greatest number of votes in its favour. At a very late stage of education the whole question of procedure in such cases will probably have been investigated by the machine itself, by means of some kind of index, and this may result in some highly sophisticated, and, one hopes, highly satisfactory, form of rule."

The policy for selecting the next move grows from greedy selection in the early stages to a highly complex evaluation of the next state (a heuristic built up gradually from experience), as the sketch below illustrates.
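
In modern terms this reads like a policy that starts out as a crude rule and is later refined by the machine's own experience. A rough sketch of that idea, my gloss rather than anything in Turing's text: an epsilon-greedy rule over learned action values, with the exploration rate decaying as 'education' proceeds.

```python
import random
from collections import defaultdict

def epsilon_greedy(values, actions, epsilon, rng=random):
    """With probability epsilon act at random (early, crude behaviour);
    otherwise follow the learned value estimates (late, 'educated')."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: values[a])

values = defaultdict(float, {"good move": 1.0, "bad move": -1.0})
for lesson in range(1, 4):
    epsilon = 1.0 / lesson   # crude 'degree of education' schedule
    print(epsilon_greedy(values, ["good move", "bad move"], epsilon))
```
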
Gödel's incompleteness theorems were a response to Russell and Whitehead's *Principia Mathematica*, an attempt to build a formalized mathematical system, described by a set of axioms and inference rules, within which any mathematical statement could be proved or disproved. About twenty years later, Gödel published his incompleteness theorems, the first of which states that in any consistent formal axiomatic system rich enough to express arithmetic there are true statements that cannot be proven within the system. Therefore, *there is no mechanical procedure that enumerates exactly the true statements in the language of such a formal system*, and that is what Turing refers to here as 'the machine'. This result (along with further work by Alonzo Church and Turing himself) effectively settled Hilbert's *Entscheidungsproblem* in the negative: it asked whether there is an algorithmic procedure for deciding whether a given proposition is universally valid. Gödel's second theorem also applies here: a consistent system of sufficient strength cannot prove its own consistency. In this case, 'the machine' can validate (i.e., prove correct) parts of its own behaviour, but it cannot be relied on to validate itself.
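
For reference, a compact and slightly informal LaTeX statement of the two theorems discussed above, assuming a consistent, effectively axiomatised theory $T$ containing enough arithmetic:

```latex
% Goedel's incompleteness theorems, informally, for a consistent,
% recursively axiomatisable theory T that interprets elementary arithmetic.
\begin{enumerate}
  \item There is a sentence $G_T$ with $T \nvdash G_T$ and
        $T \nvdash \lnot G_T$ (and $G_T$ is true in the standard model).
  \item $T \nvdash \mathrm{Con}(T)$: the theory cannot prove the
        arithmetised statement of its own consistency.
\end{enumerate}
```
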