PROCEEDINGS OF THE I.R.E.
It will be seen that the effect of expressing approval when a particular figure has been printed is to increase the chance of that figure being printed on subsequent occasions, and so, if done sufficient times, to give the appearance of a conditioned reflex action having been established.
GENERALIZED LEARNING PROGRAMS
The construction of a learning program of the above type presents no difficulty to anyone who is familiar with the technique of programming. However, such a program makes it possible to teach the machine only those things which the programmer had in mind when he wrote the program. For example, the program just described would not enable the operator to teach the machine to print different figures after alternate stimuli. If the programmer had wished to do so, he could have allowed for this possibility in his program. In fact, at the expense of making the program longer and more complicated, he could include any number of extra features, but obviously he could not think of every possible experiment which anyone might wish to try on the machine. All the programmer is doing, in fact, is to program the action of the machine as it were at one remove; when he writes the program he visualizes, if not in complete detail, at any rate in general terms, all the possible actions which the machine can take in response to legitimate actions on the part of the operator. Such programs are not, therefore, as interesting as at first sight might appear from the point of view of this article.

What is wanted is a "generalized" learning program, which would enable an operator to teach the machine anything he chose, whether the idea of his doing so had occurred to the programmer or not.
I believe that such a program would not be a mere elaboration of the simple learning programs which have been constructed up to date but would need to be based on some entirely new ideas. Presumably the program would modify and extend itself as the learning process went on. As I have pointed out, existing machines contain the means for this extension; the difficulty is to construct a program to make use of them. If such a program could be made then it would be possible to teach the machine in much the same way as a child is taught.

Whether the new ideas I have referred to will be forthcoming, it is hard to say. Certainly, for the present, progress appears to be held up. Perhaps this will give comfort to those who cannot bear the idea of machines thinking; on the other hand it may stimulate others to further effort.
Computers and Automata*

CLAUDE E. SHANNON†, FELLOW, IRE
Summary: This paper reviews briefly some of the recent developments in the field of automata and nonnumerical computation. A number of typical machines are described, including logic machines, game-playing machines and learning machines. Some theoretical questions and developments are discussed, such as a comparison of computers and the brain, Turing's formulation of computing machines and von Neumann's models of self-reproducing machines.
* Decimal classification: 621.385.2. Original manuscript received by the Institute, July 17, 1953.
† Bell Telephone Laboratories, Murray Hill, N. J.
C. E. Shannon first became known for a paper in which he applied Boolean Algebra to relay switching circuits; this laid the foundation for the present extensive application of Boolean Algebra to computer design. Dr. Shannon, who is engaged in mathematical research at Bell Telephone Laboratories, is an authority on information theory. More recently he received wide notice for his ingenious maze-solving mechanical mouse, and he is well-known as one of the leading explorers into the exciting, but uncharted world of new ideas in the computer field. The Editors asked Dr. Shannon to write a paper describing current experiments and speculations concerning future developments in computer logic. Here is a real challenge for those in search of a field where creative ability, imagination, and curiosity will undoubtedly lead to major advances in human knowledge. - The Editor

INTRODUCTION

SAMUEL BUTLER, in 1871, completed the manuscript of a most engaging social satire, Erewhon. Three chapters of Erewhon, originally appearing under the title "Darwin Among the Machines," are a witty parody of The Origin of Species. In the topsy-turvy logic of satirical writing, Butler sees machines as gradually evolving into higher forms. He considers the classification of machines into genera, species and varieties, their feeding habits, their rudimentary sense organs, their reproductive and evolutionary mechanisms (inefficient machines force men to design more efficient ones), tendencies toward reversion, vestigial organs, and even the problem of free will in machines.
Rereading Erewhon today one finds "The Book of the Machines" disturbingly prophetic. Current and projected computers and control systems are indeed assuming more and more the capacities and functions of animals and man, to a far greater degree, in fact, than was envisaged by Butler.
The bread-and-butter work of large-scale computers has been the solution of involved numerical problems. To many of us, however, the most exciting potentialities of computers lie in their ability to perform non-numerical operations: to work with logic, translate languages, design circuits, play games, co-ordinate sensory and manipulative devices and, generally, assume complicated functions associated with the human brain.
Non-numerical computation is by no means an unproven offspring of the more publicized arithmetic calculation. The shoe is rather on the other foot. A hundred years ago Charles Babbage was inspired in the design of his remarkably prescient analytical engine by a portrait woven in silk on a card-controlled Jacquard loom, a device then in existence half a century.
The largest and most reliable current information processing machine is still the automatic telephone system. Our factories are filled with ingenious and unsung devices performing almost incredible feats of sensing, processing and transporting materials in all shapes and forms. Railway and power systems have elaborate control and protective networks against accidents and human errors. These, however, are all special-purpose automata.
A significant new concept in non-numerical computation is the idea of a general-purpose programmed computer: a device capable of carrying out a long sequence of elementary orders analogous to those of a numerical computer. The elementary orders, however, will relate not to operations on numbers but to physical motions, operations with words, equations, incoming sensory data, or almost any physical or conceptual entities.
This paper reviews briefly some of the research in non-numerical computation and discusses certain of the problems involved. The field is currently very active and in a short paper only a few sample developments can be mentioned.
THE BRAIN AND COMPUTERS
The brain has often been compared, perhaps over-enthusiastically, with computing machines. It contains roughly 10¹⁰ active elements called neurons. Because of the all-or-none law of nervous action, neurons bear some functional resemblance to our binary computer elements: relays, vacuum tubes or transistors. The number of elements is six orders of magnitude greater than our largest computers.
McCulloch has picturesquely put it that a computer with as many tubes as a man has neurons would require the Empire State Building to house it, Niagara Falls to power it and the Niagara River to cool it. The use of transistors in such a comparison would improve the figures considerably, power requirements coming down to the hundreds of kilowatts range (the brain dissipates some 25 watts) and size requirements (with close packing) comparable to an ordinary dwelling. It may also be argued that the increased speed of electronic components by a factor of, say, 10⁴ might be partially exchangeable against equipment requirements.
Comparisons of this sort should be taken well salted; our understanding of brain functioning is still, in spite of a great deal of important and illuminating research, very primitive. Whether, for example, the neuron itself is the proper level for a functional analysis is still an open question. The random structure at the neural level, in number, placement and interconnections of the neurons, suggests that only the statistics are important at this stage, and, consequently, that one might average over local structure and functioning before constructing a mathematical model.
The similarities between the brain and computers have often been pointed out. The differences are perhaps more illuminating, for they may suggest the important features missing from our best current brain models. Among the most important of these are:
1. Differences in size. Six orders of magnitude in the number of components takes us so far from our ordinary experience as to make extrapolation of function next to meaningless.

2. Differences in structural organization. The apparently random local structure of nerve networks is vastly different from the precise wiring of artificial automata, where a single wrong connection may cause malfunctioning. The brain somehow is designed so that overall functioning does not depend on the exact structure in the small.

3. Differences in reliability organization. The brain can operate reliably for decades without really serious malfunctioning (comparable to the meaningless gibberish produced by a computer in trouble conditions) even though the components are probably individually no more reliable than those used in computers.

4. Differences in logical organization. The differences here seem so great as to defy enumeration. The brain is largely self-organizing. It can adapt to an enormous variety of situations tolerably well. It has remarkable memory classification and access features, the ability to rapidly locate stored data via numerous "coordinate systems." It can set up stable servo systems involving complex relations between its sensory inputs and motor outputs, with great facility. In contrast, our digital computers look like idiot savants. For long chains of arithmetic operations a digital computer runs circles around the best humans. When we try to program computers for other activities their entire organization seems clumsy and inappropriate.
5. Differences in input-output equipment. The brain is equipped with beautifully designed input organs, particularly the ear and the eye, for sensing the state of its environment. Our best artificial counterparts, such as Shepard's Analyzing Reader for recognizing and transcribing type, and the "Audrey" speech recognition system which can recognize the speech sounds for the ten digits, seem pathetic by comparison. On the output end, the brain controls hundreds of muscles and glands. The two arms and hands have some sixty independent degrees of freedom. Compare this with the manipulative ability of the digitally controlled milling machine developed at M.I.T., which can move its work in but three co-ordinates.

Most of our computers, indeed, have no significant sensory or manipulative contact with the real world but operate only in an abstract environment of numbers and operations on numbers.
TURING MACHINES
The basic mathematical theory of digital computers was developed by A. M. Turing in 1936 in a classic paper, "On Computable Numbers, with an Application to the Entscheidungsproblem." He defined a class of computing machines, now called Turing machines, consisting basically of an infinite paper tape and a computing element. The computing element has a finite number of internal states and is capable of reading from and writing on one cell of the tape and of moving it one cell to the right or left.
At a given time, the computing element will be in a certain state and reading what is written in a particular cell of the tape. The next operation will be determined by the current state and the symbol being read. This operation will consist of assuming a new state and either writing a new symbol (in place of the one currently read) or moving to the right or to the left.

It is possible for machines of this type to compute numbers by setting up a suitable code for interpreting the symbols. For example, in Turing's formulation the machines print final answers in binary notation on alternate cells of the tape, using the other cells for intermediate calculations.
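The read-write-move cycle just described can be sketched as a short simulator. The transition-table layout and the example machine below are illustrative assumptions, not anything from Turing's paper; note that each step changes state and then either writes or moves, exactly as in the formulation above.

```python
def run_turing(transitions, tape, state="q0", head=0, halt="halt", max_steps=1000):
    """Minimal Turing-machine sketch: the next act is fixed by the pair
    (current state, symbol read); the act is a new state plus either a
    symbol to write in place or a one-cell move 'L'/'R'."""
    tape = dict(enumerate(tape))               # sparse stand-in for the infinite tape
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)             # blank cells read as 0
        state, action = transitions[(state, symbol)]
        if action in ("L", "R"):               # move one cell left or right
            head += 1 if action == "R" else -1
        else:                                  # write a new symbol in place
            tape[head] = action
    return [tape[i] for i in sorted(tape)]

# A toy machine that walks right, overwriting 0s with 1s, until it reads a 2.
prog = {("q0", 0): ("q1", 1),    # read 0: write 1
        ("q1", 1): ("q0", "R"),  # then step right
        ("q0", 2): ("halt", 2)}  # read 2: stop
print(run_turing(prog, [0, 0, 0, 2]))  # → [1, 1, 1, 2]
```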
It can be shown that such machines form an extremely broad class of computers. All ordinary digital computers which do not contain a random or probabilistic element are equivalent to some Turing machine. Any number that can be computed on these machines, or in fact by any ordinary computing process, can be computed by a suitable Turing machine. There are, however, as Turing showed, certain problems that cannot be solved and certain numbers that cannot be computed by any Turing machine. For example, it is not possible to construct a Turing machine which, given a suitably coded description of another Turing machine, can always tell whether or not the second Turing machine will continue indefinitely to print symbols in the squares corresponding to the final answer. It may, at a certain point in the calculation, relapse into an infinite intermediate computation. The existence of mechanically unsolvable problems of this sort is of great interest to logicians.
Turing also developed the interesting concept of a universal Turing machine. This is a machine with the property that if a suitably coded description of any Turing machine is printed on its tape, and the machine started at a suitable point and in a suitable state, it will then act like the machine described; that is, compute (normally at a much slower rate) the same number that the described machine would compute. Turing showed that such universal machines can be designed. They of course are capable of computing any computable number. Most digital computers, provided they have access to an unlimited memory of some sort, are equivalent to universal Turing machines and can, in principle, imitate any other computing machine and compute any computable number.
The work of Turing has been generalized and reformulated in various ways. One interesting generalization is the notion of A-computability. This relates to a class of Turing-type machines which have the further feature that they can, at certain points of the calculation, ask questions of a second "oracular" device, and use the answers in further calculations. The oracular machine may for example have answers to some of the unsolvable problems of ordinary Turing machines, and consequently enable the solution of a larger class of problems.
LOGIC MACHINES
Boolean algebra can be used as a mathematical tool for studying the properties of relay and switching circuits. Conversely, it is possible to solve problems of Boolean algebra and formal logic by means of simple relay circuits. This possibility has been exploited in a number of logic machines.

A typical machine of this kind, described by McCallum and Smith, can handle logical relations involving up to seven classes or truth variables. The required relations among these variables, given by the logical problem at hand, are plugged into the machine by means of a number of "connective boxes." These connective boxes are of six types and provide for the logical connectives "not," "and," "or," "or else," "if and only if," and "if-then."
When the connections are complete, starting the machine causes it to hunt through the 2⁷ = 128 combinations of the basic variables, stopping at all combinations which satisfy the constraints. The machine also indicates the number of "true" variables in each of these states. McCallum and Smith give the following typical problem that may be solved on the machine:
    It is known that salesmen always tell the truth and engineers always tell lies. G and E are salesmen. C states that D is an engineer. A declares that B affirms that C asserts that D says that E insists that F denies that G is a salesman. If A is an engineer, how many engineers are there?
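Since the machine simply hunts through all 128 assignments of the truth variables, the puzzle can be checked the same way in software. A minimal sketch, assuming the usual reading of such puzzles, namely that "X says P" holds exactly when X is a salesman and P is true:

```python
from itertools import product

people = "ABCDEFG"

def consistent(s):
    # s maps each person to True (salesman, truth-teller) or False (engineer, liar)
    iff = lambda p, q: p == q
    p0 = s["G"]                      # "G is a salesman"
    p1 = iff(s["F"], not p0)         # F denies that G is a salesman
    p2 = iff(s["E"], p1)             # E insists that ...
    p3 = iff(s["D"], p2)             # D says that ...
    p4 = iff(s["C"], p3)             # C asserts that ...
    p5 = iff(s["B"], p4)             # B affirms that ...
    return (s["G"] and s["E"]                # G and E are salesmen
            and iff(s["C"], not s["D"])      # C states that D is an engineer
            and iff(s["A"], p5)              # A declares that ...
            and not s["A"])                  # given: A is an engineer

solutions = [dict(zip(people, bits))
             for bits in product([True, False], repeat=7)
             if consistent(dict(zip(people, bits)))]
counts = {sum(not v for v in sol.values()) for sol in solutions}
print(counts)  # → {3}: every consistent assignment has exactly three engineers
```

Several assignments survive the search, but all of them contain the same number of engineers, which is what makes the question well posed.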
A very suggestive feature in this machine is a selective feedback system for hunting for particular solutions of the logical equations without an exhaustive search through all possible combinations. This is achieved by elements which sense whether or not a particular logical relation is satisfied. If not, the truth variables involved in this relation are caused to oscillate between their two possible values. Thus, variables appearing in unsatisfied relations are continually changing, while those appearing only in satisfied relations do not change. If ever all relations are simultaneously satisfied the machine stops at that particular solution.
Changing only the variables in unsatisfied relations tends, in a general way, to lead to a solution more rapidly than methodical exhaustion of all cases, but, as is usually the case when feedback is introduced, leads to the possibility of continual oscillation. McCallum and Smith point out the desirability of making the changes of the variables due to the feedback unbalance as random as possible, to enable the machine to escape from periodic paths through various states of the relays.
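The same hunt can be imitated in software as a stochastic local search: flip only variables that appear in unsatisfied relations, with the randomness that McCallum and Smith recommend for escaping periodic paths. The constraint encoding and flip rule below are my own illustrative choices, not a description of the relay circuit.

```python
import random

def feedback_search(n_vars, relations, max_steps=10000, seed=0):
    """Each relation is (test, involved_vars). Variables in unsatisfied
    relations 'oscillate' (are flipped at random) until every relation
    holds, at which point the search, like the machine, stops."""
    rng = random.Random(seed)
    vals = [rng.choice([False, True]) for _ in range(n_vars)]
    for _ in range(max_steps):
        unsat = [(test, vars_) for test, vars_ in relations if not test(vals)]
        if not unsat:
            return vals                    # all relations simultaneously satisfied
        _, vars_ = rng.choice(unsat)       # randomness helps avoid periodic paths
        vals[rng.choice(vars_)] ^= True    # flip one variable involved in it
    return None

# Toy constraints: x0 or x1; not (x1 and x2); x2 if and only if x0.
relations = [
    (lambda v: v[0] or v[1], [0, 1]),
    (lambda v: not (v[1] and v[2]), [1, 2]),
    (lambda v: v[2] == v[0], [0, 2]),
]
sol = feedback_search(3, relations)
print(sol, all(test(sol) for test, _ in relations))
```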
GAME PLAYING MACHINES
The problem of designing game-playing machines is fascinating and has received a good deal of attention. The rules of a game provide a sharply limited environment in which a machine may operate, with a clearly defined goal for its activities. The discrete nature of most games matches well the digital computing techniques available, without the cumbersome analog-digital conversion necessary in translating our physical environment in the case of manipulating and sensing machines.

Game playing machines may be roughly classified into types in order of increasing sophistication:
1. Dictionary-type machines. Here the proper move of the machine is decided in advance for each possible situation that may arise in the game and listed in a "dictionary" or function table. When a particular position arises, the machine merely looks up the move in the dictionary. Because of the extravagant memory requirements, this rather uninteresting method is only feasible for exceptionally simple games, e.g., tic-tac-toe.

2. Machines using rigorously correct playing formulas. In some games, such as Nim, a complete mathematical theory is known, whereby it is possible to compute by a relatively simple formula, in any position that can be won, a suitable winning move. A mechanization of this formula provides a perfect game player for such games.

3. Machines applying general principles of approximate validity. In most games of interest to humans, no simple exact solution is known, but there are various general principles of play which hold in the majority of positions. This is true of such games as checkers, chess, bridge, poker and the like. Machines may be designed applying such general principles to the position at hand. Since the principles are not infallible, neither are the machines, as indeed, neither are humans.

4. Learning machines. Here the machine is given only the rules of the game and perhaps an elementary strategy of play, together with some method of improving this strategy through experience. Among the many methods that have been suggested for incorporation of learning we have: a) trial-and-error with retention of successful and elimination of unsuccessful possibilities; b) imitation of a more successful opponent; c) "teaching" by approval or disapproval, or by informing the machine of the nature of its mistakes; and finally d) self-analysis by the machine of its mistakes in an attempt to devise general principles.
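For type 2, the "relatively simple formula" in Nim is the nim-sum, the bitwise XOR of the pile sizes: a position is lost exactly when the nim-sum is zero, and a winning move shrinks some pile so that it becomes zero. A sketch for normal play (the player taking the last object wins):

```python
from functools import reduce
from operator import xor

def nim_move(piles):
    """Return (pile_index, new_size) for a winning Nim move, or None
    when the position is already lost against best play."""
    nim_sum = reduce(xor, piles, 0)    # the formula: XOR of all pile sizes
    if nim_sum == 0:
        return None                    # every available move loses
    for i, p in enumerate(piles):
        target = p ^ nim_sum
        if target < p:                 # shrink this pile; nim-sum becomes 0
            return i, target
    # Unreachable: some pile always contains the nim-sum's highest bit.

print(nim_move([3, 4, 5]))  # → (0, 1): reduce the first pile to 1
print(nim_move([1, 2, 3]))  # → None: the position is already lost
```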
Many examples of the first two types have been constructed and a few of the third. The fourth type, learning game-players, is reminiscent of Mark Twain's comment on the weather. Here is a real challenge for the programmer and machine designer.
Two examples of the third category, machines applying general principles, may be of interest. The first of these is a machine designed by E. F. Moore and the writer for playing a commercial board game known as Hex. This game is played on a board laid out in a regular hexagon pattern, the two players alternately placing black and white pieces in unoccupied hexagons. The entire board forms a rhombus and Black's goal is to connect the top and bottom of this rhombus with a continuous chain of black pieces. White's goal is to connect the two sides of the rhombus with a chain of white pieces.
After a study of this game, it was conjectured that a reasonably good move could be made by the following process. A two-dimensional potential field is set up corresponding to the playing board, with white pieces as positive charges and black pieces as negative charges. The top and bottom of the board are negative and the two sides positive. The move to be made corresponds to a certain specified saddle point in this field.
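The field computation can be imitated numerically. The sketch below is a loose stand-in for the resistance network: a square grid instead of hexagons, Jacobi relaxation of the discrete Laplace equation, and charges clamped to ±1 are all my assumptions, and it stops short of the saddle-point search that picked the actual move.

```python
def potential_field(size, pos_cells, neg_cells, iters=500):
    """Relax a grid potential: charged cells are held at +1/-1, every
    free cell tends toward the average of its neighbours."""
    v = [[0.0] * size for _ in range(size)]
    for _ in range(iters):
        nxt = [[0.0] * size for _ in range(size)]
        for x in range(size):
            for y in range(size):
                nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < size and 0 <= y + dy < size]
                nxt[x][y] = sum(v[a][b] for a, b in nbrs) / len(nbrs)
        v = nxt
        for x, y in pos_cells: v[x][y] = 1.0    # white pieces, side edges: positive
        for x, y in neg_cells: v[x][y] = -1.0   # black pieces, top and bottom: negative
    return v

field = potential_field(5, pos_cells=[(0, 0)], neg_cells=[(4, 4)])
```

The interior potential then interpolates smoothly between the charges, which is what makes saddle points of the field meaningful as candidate moves.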
To test this strategy, an analog device was constructed, consisting of a resistance network and gadgetry to locate the saddle points. The general principle, with some improvements suggested by experience, proved to be reasonably sound. With first move, the machine won about seventy per cent of its games against human opponents. It frequently surprised its designers by choosing odd-looking moves which, on analysis, proved sound.
We normally think of computers as expert at long involved calculations and poor in generalized value judgments. Paradoxically, the positional judgment of this machine was good; its chief weakness was in end-game combinatorial play. It is also curious that the Hex-player reversed the usual computing procedure in that it solved a basically digital problem by an analog machine.
The game of checkers has recently been programmed into a general-purpose computer, using a "general principle" approach. C. S. Strachey used a method similar to one proposed by the writer for programming chess: an investigation of the possible variations for a few moves and a minimax evaluation applied to the resulting positions.
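The scheme just named, exploring variations a few moves deep and backing scores up by minimax, can be sketched abstractly. The game interface (moves, apply_move, evaluate) and the toy two-ply tree below are invented for illustration; they are not Strachey's program.

```python
def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    """Investigate the possible variations to the given depth, score the
    resulting positions, and back the scores up by minimax."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position), None
    best_val, best_move = None, None
    for m in legal:
        val, _ = minimax(apply_move(position, m), depth - 1,
                         not maximizing, moves, apply_move, evaluate)
        if best_val is None or (val > best_val if maximizing else val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# Toy two-ply game tree standing in for a board position.
tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_value = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

val, move = minimax("start", 2, True,
                    moves=lambda p: tree.get(p, []),
                    apply_move=lambda p, m: m,
                    evaluate=lambda p: leaf_value.get(p, 0))
print(val, move)  # → 3 a: the maximizer avoids line "b", whose best reply is worth 2
```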
The following is a sample game played by the checker program with notes by Strachey. (The white squares are numbered consecutively, 0-31, from left to right and top to bottom. Numbers in parentheses indicate captures.)
    MACHINE          STRACHEY
    11-15            23-18
    7-11             21-17
    8-12             20-16 a
    12-21 (16)       25-16 (21)
    9-14 ! b         18-9 (14)
    6-20 (16, 9) c   27-23
    2-7 d            23-18
    5-8              18-14
    8-13 e           17-8 (13)
    4-13 (8)         14-9
    1-5 f            9-6
    15-19            6-1 (K)
    5-9              1-6? g
    0-5 h            6-15 (10)
    11-25 (22, 15)   30-21 (25)
    13-17            21-14 (17)
    9-18 (14)        24-21
    18-23            26-22
    23-27            22-17
    5-8 i            17-14
    8-13             14-9
    19-23            9-6
    23-26 j          31-22 (26)
    27-31 (K)        6-2 (K)
    7-10             2-7
    10-15            21-16? k
    3-10 (7)         16-9 (13)
    10-14            9-6
    15-19            6-2 (K)
    31-27 m          2-6
    27-31 m          6-10
    31-26 n          10-17 (14)
    19-23            29-25
    26-31 p
Notes:
a) An experiment on my part, the only deliberate offer I made. I thought, wrongly, that it was quite safe.
b) Not foreseen by me.
c) Better than 5-21 (9, 17).
d) A random move (zero value). Shows the lack of a constructive plan.
e) Another random move of zero value. Actually rather good.
f) Bad. Ultimately allows me to make a King. 10-14 would have been better.
g) A bad slip on my part.
h) Taking full advantage of my slip.
i) Bad, unblocks the way to a King.
j) Sacrifice in order to get a King (not to stop me Kinging). A good move, but not possible before 19-23 had been made by chance.
k) Another bad slip on my part.
m) Purposeless. The strategy is failing badly in the end game.
n) Too late.
p) Futile. The game was stopped at this point as the outcome was obvious.
While obviously no world champion, the machine is certainly better than many humans. Strachey points out various weaknesses in the program, particularly in certain end-game positions, and suggests possible improvements.
LEARNING MACHINES
The concept of learning, like those of thinking, consciousness and other psychological terms, is difficult to define precisely in a way acceptable to the various interested parties. A rough formulation might be framed somewhat as follows. Suppose that an organism or a machine can be placed in, or connected to, a class of environments, and that there is a measure of "success" or "adaptation" to the environment. Suppose further that this measure is comparatively local in time, that is, that one can measure the success over periods of time short compared to the life of the organism. If this local measure of success tends to improve with the passage of time, for the class of environments in question, we may say that the organism or machine is learning to adapt to these environments relative to the measure of success chosen.
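One way to make the rough formulation concrete: record success per trial, average over periods short compared with the whole record, and ask whether the local measure tends to improve. The window size and the monotonicity test below are my own illustrative choices, not part of the definition.

```python
def is_learning(successes, window=10):
    """successes: a 0/1 record, one entry per trial. Compute the local
    success measure over consecutive windows and test whether it
    tends to improve with the passage of time."""
    rates = [sum(successes[i:i + window]) / window
             for i in range(0, len(successes) - window + 1, window)]
    return all(a <= b for a, b in zip(rates, rates[1:])), rates

improving = [0] * 10 + [1] * 5 + [0] * 5 + [1] * 10
print(is_learning(improving))  # → (True, [0.0, 0.5, 1.0])
```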
Learning achieves a quantitative significance in terms of the broadness and complexity of the class of environments to which the machine can adapt. A chess playing machine whose frequency of wins increases during its operating life may be said by this definition to be learning chess, the class of environments being the chess players who oppose it, and the adaptation measure, the winning of games.
A number of attempts have been made to construct simple learning machines. The writer constructed a maze-solving device in which an arbitrary maze can be set up in a five-by-five array of squares, by placing partitions as desired between adjacent squares. A permanently magnetized "mouse," placed in the maze, blunders about by a trial and error procedure, striking various partitions and entering blind alleys until it eventually finds its way to the "food box." Placed in the maze a second time, it will move directly to the food box from any part of the maze that it has visited in its first exploration, without errors or false moves. Placed in other parts of the maze, it will blunder about until it reaches a previously explored part and from there go directly to the goal. Meanwhile it will have added the information about this part of the maze to its memory, and if placed at the same point again will go directly to the goal.
Thus by placing it in the various unexplored parts of the maze, it eventually builds up a complete pattern of information and is able to reach the goal directly from any point. If the maze is now changed, the mouse first tries the old path, but on striking a partition starts trying other directions and revising its memory until it eventually reaches the goal by some other path. Thus it is able to forget an old solution when the problem is changed.
The mouse is actually driven by an electromagnet moving beneath the maze. The motion of the electromagnet is controlled by a relay circuit containing about 110 relays, organized into a memory and a computing circuit, somewhat after that of a digital computer. The maze-solver may be said to exhibit at a very primitive level the abilities to (1) solve problems by trial and error, (2) repeat the solutions without the errors, (3) add and correlate new information to a partial solution, and (4) forget a solution when it is no longer applicable.
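The mouse's strategy, blunder at random, remember the last successful exit taken from each square, replay the memory, and revise it when a remembered step is blocked, can be sketched as follows. The data layout (walls as a set of blocked cell pairs, memory as a cell-to-next-cell map) and the wall-free demonstration maze are assumptions for illustration, not the relay circuit.

```python
import random

def solve_maze(walls, start, goal, memory, size=5, seed=0):
    """Walk from start to goal on a size x size grid. Explored squares are
    replayed from memory; elsewhere the mouse tries directions at random,
    recording (and, on a blocked step, revising) the exit from each square."""
    rng = random.Random(seed)
    pos, path = start, [start]
    while pos != goal:
        if pos in memory and (pos, memory[pos]) not in walls:
            nxt = memory[pos]                  # replay the remembered solution
        else:
            x, y = pos                         # trial and error among open neighbours
            options = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= x + dx < size and 0 <= y + dy < size
                       and ((x, y), (x + dx, y + dy)) not in walls]
            nxt = rng.choice(options)
        memory[pos] = nxt                      # remember (or revise) this exit
        pos = nxt
        path.append(pos)
    return path

memory = {}
first = solve_maze(set(), (0, 0), (4, 4), memory)    # blundering exploration
replay = solve_maze(set(), (0, 0), (4, 4), memory)   # second run uses the memory
print(len(first), len(replay))
```

Because each square remembers only its last successful exit, the second run follows those exits straight to the goal without revisiting any square, mirroring the mouse's error-free second trip.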