somehow be built. Suppose further that programmers did somehow come to have faith in its verifications. In the absence of any real-world basis for such belief, it would have to be blind faith, but no matter. Suppose that the philosopher's stone had been found, that lead could be changed to gold, and that programmers were convinced of the merits of feeding their programs into the gaping jaws of a verifier. It seems to us that the scenario envisioned by the proponents of verification goes something like this: The programmer inserts his 300-line input/output package into the verifier. Several hours later, he returns. There is his 20,000-line verification and the message "VERIFIED."
There is a tendency, as we begin to feel that a structure is logically, provably right, to remove from it whatever redundancies we originally built in because of lack of understanding. Taken to its extreme, this tendency brings on the so-called Titanic effect; when failure does occur, it is massive and uncontrolled. To put it another way, the severity with which a system fails is directly proportional to the intensity of the designer's belief that it cannot fail. Programs designed to be clean and tidy merely so that they can be verified will be particularly susceptible to the Titanic effect. Already we see signs of this phenomenon. In their notes on Euclid [16], a language designed for program verification, several of the foremost verification adherents say, "Because we expect all Euclid programs to be verified, we have not made special provisions for exception handling ... Runtime software errors should not occur in verified programs." Errors should not occur? Shades of the ship that shouldn't be sunk.
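The redundancy at stake can be pictured in a few lines. What follows is a hypothetical sketch of our own, in Python, with invented names; nothing like it appears in the Euclid notes. A defensive routine keeps a runtime check that a tidied-up, "verified" rewrite, trusting its proof, discards:

```python
def average(xs: list[float]) -> float:
    # Defensive version: the redundancy built in for lack of understanding.
    # Failure, if it comes, is controlled and carries a diagnosis.
    if not xs:
        raise ValueError("average of an empty list")
    return sum(xs) / len(xs)

def average_trusting(xs: list[float]) -> float:
    # "Verified" version: the precondition len(xs) > 0 has been proved,
    # so the check is removed as dead weight. If the proof's assumption
    # is ever violated in the field, the failure is uncontrolled: a bare
    # ZeroDivisionError surfacing far from its cause.
    return sum(xs) / len(xs)
```

The point is not that the check costs anything; it is that removing it converts a graceful failure into the massive, uncontrolled kind.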
So, having for the moment suspended all rational disbelief, let us suppose that the programmer gets the message "VERIFIED." And let us suppose further that the message does not result from a failure on the part of the verifying system. What does the programmer know? He knows that his program is formally, logically, provably, certifiably correct. He does not know, however, to what extent it is reliable, dependable, trustworthy, safe; he does not know within what limits it will work; he does not know what happens when it exceeds those limits. And yet he has that mystical stamp of approval: "VERIFIED." We can almost see the iceberg looming in the background over the unsinkable ship.
Luckily, there is little reason to fear such a future. Picture the same programmer returning to find the same 20,000 lines. What message would he really find, supposing that an automatic verifier could really be built? Of course, the message would be "NOT VERIFIED." The programmer would make a change, feed the program in again, return again. "NOT VERIFIED." Again he would make a change, again he would feed the program to the verifier, again "NOT VERIFIED." A program is a human artifact; a real-life program is a complex human artifact; and any human artifact of sufficient size and complexity is imperfect. The message will never read "VERIFIED."
The Role of Continuity
We may say, roughly, that a mathematical idea is "significant" if it can be connected, in a natural and illuminating way, with a large complex of other mathematical ideas.
G.H. Hardy
The only really fetching defense ever offered for
verification is the scaling-up argument. As best we can
reproduce it, here is how it goes:
(1) Verification is now in its infancy. At the moment, the largest tasks it can handle are verifications of algorithms like FIND and model programs like GCD. It will in time be able to tackle more and more complicated algorithms and trickier and trickier model programs. These verifications are comparable to mathematical proofs. They are read. They generate the same kinds of interest and excitement that theorems do. They are subject to the ordinary social processes that work on mathematical reasoning, or on reasoning in any other discipline, for that matter.
(2) Big production systems are made up of nothing more than algorithms and model programs. Once verified, algorithms and model programs can make up large, workaday production systems, and the (admittedly unreadable) verification of a big system will be the sum of the many small, attractive, interesting verifications of its components.
With (1) we have no quarrel. Actually, algorithms were proved and the proofs read and discussed and assimilated long before the invention of computers--and with a striking lack of formal machinery. Our guess is that the study of algorithms and model programs will develop like any other mathematical activity, chiefly by informal, social mechanisms, very little if at all by formal mechanisms.
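To make concrete the kind of small verification conceded in (1), consider what a verified model program like GCD amounts to: Euclid's algorithm with its precondition, loop invariant, and postcondition written out. The rendering below is our own sketch in Python, with the assertions standing in for the formal annotations a verifier would consume:

```python
from math import gcd as oracle  # used only to state the invariant checkably

def euclid_gcd(a: int, b: int) -> int:
    assert a > 0 and b > 0  # precondition
    x, y = a, b
    while y != 0:
        # Loop invariant: gcd(x, y) == gcd(a, b). Each step preserves it,
        # since gcd(x, y) == gcd(y, x % y), and y strictly decreases,
        # so the loop terminates.
        assert oracle(x, y) == oracle(a, b)
        x, y = y, x % y
    # Postcondition: y == 0, so x == gcd(x, 0) == gcd(a, b).
    assert x == oracle(a, b)
    return x
```

Such a proof fits on a page and invites reading and discussion; that is the world the scaling-up argument starts from.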
It is with (2) that we have our fundamental disagreement. We argue that there is no continuity between the world of FIND or GCD and the world of production software, billing systems that write real bills, scheduling systems that schedule real events, ticketing systems that issue real tickets. And we argue that the world of production software is itself discontinuous.
No programmer would agree that large production systems are composed of nothing more than algorithms and small programs. Patches, ad hoc constructions, band-aids and tourniquets, bells and whistles, glue, spit and polish, signature code, blood-sweat-and-tears, and, of course, the kitchen sink--the colorful jargon of the practicing programmer seems to be saying something about the nature of the structures he works with; maybe theoreticians ought to be listening to him. It has been estimated that more than half the code in any real production system consists of user interfaces and error messages--ad hoc, informal structures that are by definition unverifiable. Even the verifiers themselves sometimes seem to realize the unverifiable nature of most real software. C.A.R. Hoare has been quoted [9] as saying,