
YOU KNOW WHAT I MEAN... - John Wood Campbell

I've had an opportunity to learn a little about a project now under way at the Harvard Computer Lab; the men engaged in it probably do not have the same opinions about it that I have formed. We'll find out later whether my hunches regarding it are valid.
I have a feeling the job now started will snowball for the next century or so - and that they have started on the most important basic project Man has ever tackled.
They're studying the problem of teaching a computing machine to translate English to Russian, and Russian to English.
It's my belief that, in the process, they will solve about ninety per cent of Mankind's social, psychological, economic and political problems. The computers won't solve the problems - but they'll force the men working on them to solve them.
Reason: you can NOT say to a computer "You know what I mean...". The computer would only reply, "No. Define 'you'. Define 'know'. Define 'I'. Define 'mean'. Operation-relational processes regarding these terms not available".
All right, friend - go ahead. Define "I". Define it in terms of function and relationship to the Universe. Define it in terms of characteristics of process, and program the steps the computer is to take in interacting this concept "I" with the operational program steps meant by the concept "know". Just do that one, single little thing - just define that one pair of terms - and you'll resolve about seventy-five per cent of all human problems.
Korzybski was a piker. He tried to teach human beings, who have built-in automatic self-programming units. They may not be perfect, but they work with incredible efficiency.
Try teaching a computing machine what you mean by some nice, simple term like "food". There's a good, basic, simple idea - an item basic to the most elementary understandings of life processes, politics, sociology, psychology and economics. This is one that must be included, obviously.
Anthony Oettinger, one of the men working on the project, explained part of the problem very neatly and completely by telling of one phase of the difficulty. Suppose we take a common English saying, and translate it into Chinese. Now if translation were perfect, we should be able to retranslate to English and recover the original phrase. Actually, in one instance, the retranslation yields "invisible idiot". Guess what went in originally! It's a perfectly understandable result; after all, something that is invisible, is out of sight - and an idiot is one who is out of mind. It could equally have come out "hidden maniac" or "distant madman".
Translation cannot be done on a word basis; we don't use words, actually, but concepts. Translating word-by-word would be only slightly more rewarding than transliterating letter by letter. The Russian alphabet is different from ours; that doesn't mean that transliterating yields English. Neither does a word-for-word substitution, save in the simplest level of statement.
The Chinese-English saying translation above indicates the real difficulty - and one that General Semantics hasn't adequately recognized, I feel. Actually, in communicating with each other, we seek to communicate concepts; concepts are complex structures of many individual parts assembled in a precise relationship. If someone asked a chemist for sugar, and the chemist delivered a pile of carbon and two small flasks of hydrogen and oxygen - everything necessary for sugar is present, but it's not sugar.
Let's consider "food" for a moment. Presumably we are seeking to achieve sane translations of sane human thinking from our computer. Under these conditions, should we teach the machine that human flesh is to be considered "food"?
Yes. A sane man must realize that his flesh is food - otherwise he would make the mistake of swimming in shark-infested waters, or of ignoring lions and other major carnivora.
Is wood "food"?
Yes; an engineer must realize that fact when he considers constructing buildings. Otherwise he would neglect the possibilities of termite damage. Is steel "food"?
We must so instruct the computer; otherwise it could not translate "We must have steel scrap to feed our hungry furnaces".
Very well, gentlemen, what do you mean when you consider the concepts in "food", "feed" and "eat"? Define your terms!
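As a rough sketch of what "define your terms" demands in practice - my illustration, not anything from the Harvard project, with every name in it invented for the purpose - "food" cannot be stored as a property of a thing at all; it has to be stored as a relation between an eater, a thing eaten, and a process of eating:

    # A toy sketch (illustrative only): "food" as a relation, not a property.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConsumesRelation:
        consumer: str   # the thing doing the "eating"
        consumed: str   # the thing serving as "food"
        process: str    # the operational sense of "eat"/"feed" in this pairing

    # A few of the facts discussed above, recorded relationally.
    KNOWLEDGE = {
        ConsumesRelation("shark", "human flesh", "digestion"),
        ConsumesRelation("termite", "wood", "digestion"),
        ConsumesRelation("blast furnace", "steel scrap", "charging and melting"),
    }

    def is_food_for(thing: str, consumer: str) -> bool:
        # A thing is "food" only relative to a particular consumer.
        return any(r.consumed == thing and r.consumer == consumer for r in KNOWLEDGE)

    print(is_food_for("steel scrap", "blast furnace"))  # True
    print(is_food_for("steel scrap", "termite"))        # False

The only point of the toy is that the question "is X food?" has no answer until the consumer and the consuming process have been named - which is exactly the definition the machine keeps demanding.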
The sociologists and psychologists have long maintained that mathematical methods are not applicable to human problems. Not until the terms in which human problems are discussed have been defined operationally, certainly.
Teaching a computing machine - a machine that will invariably do precisely, but only, what you did in fact instruct it to do - will be a most humbling task. In the course of doing that job, I foresee the collapse of every human philosophy, the harsh winnowing of every human falsity, every slightest quibble, self-justification, or rationalization.
When a man is seeking to induce another man to agree with him, to learn his ideas, he can hold "He is stubborn; he refuses to understand me because he hates me". Or "He is too stupid to learn!".
When a man seeks to teach a computer...
A computer is not stubborn. If it is stupid, that stupidity is the failure of the man to perfect his handiwork, and the failure reflects inescapably back to its source in Man. If it acts in a foggy, confused manner - Man made the mistake, and he must correct it. It's his mistake; responsibility cannot be assigned elsewhere.
Man, in trying to teach his tender and precious beliefs to a computing machine, is inviting the most appallingly frank and inescapable criticism conceivable. The computing machines won't solve human problems for us - but they'll force men to a degree of rigid self-honesty and humility that never existed before.
I can imagine some philosopher, some psychologist, or some physicist coming spluttering to the computer lab, demanding that the nonsensical answers, so blatantly in disregard of the facts-as-he-believes-them, be corrected. "Out of the way; let someone who knows something about this field teach this machine a few realities!".
Three weeks later, a haggard and vastly humbled man would come out, his fine structure of beliefs in tatters - and possessed of a realization of his own need to learn a few real realities.
I have heard psychologists use the term "ego", and the terms "id" and "identity". I've looked, with some interest, in an Encyclopedia of Psychology; there is no entry under any one of those terms - no effort, even, to define them.
Have you ever sought a definition of "distance" as used in physics? It's one of the three fundamentals of the CGS system - and has no definition whatever. Define your terms, the computer relentlessly demands. The mathematician has no definition for "quantity" or "distance" either. Cantor has proved mathematically that any line segment has as many points - the cardinality of the continuum - as any other line segment, however long or short, or as any plane. Then define what you mean by "greater than" or "less than"! Until you do, the whole structure rests on "You know what I mean..."
The computer does not "know what you mean". Define it!
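As a concrete footnote to the Cantor remark above - a standard textbook pairing, not anything from the editorial itself - one explicit formula is enough to show that a short segment has exactly as many points as a long one:

    % A standard bijection: the unit segment paired point-for-point with a
    % segment of any length L > 0 (illustrative footnote, not Campbell's text).
    \[
      f : [0,1] \to [0,L], \qquad f(x) = Lx
    \]
    % f matches every point of one segment with exactly one point of the other,
    % so the two segments have the same cardinality - the cardinality of the
    % continuum, $2^{\aleph_0}$ - however much their lengths differ. "Greater
    % than" for segments must therefore be defined by measure (length), not by
    % counting points, which is precisely the definition being demanded.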
A while back I ran a faulty "syllogism" going, essentially, "Biology holds no organism can live in a medium of its own products. Communism holds a man has a right to what he produces. Therefore, Communism won't work". It was thrown in as a deliberate inducement to thinking and questioning of terms. Most of those who answered - some quite angrily, incidentally! - held that the flaw lay in the misuse of the terms "products" and "produces".
There's a flaw all right - but that's not it. The computer would have spotted it immediately; only we humans have difficulty in finding it.
The products of an organism are quite artificially divided into "products" and "by-products" and "waste products". As industry long since learned, a waste product is something we haven't learned a use for yet, and a by-product is a misleading term. What is the product of Street & Smith Publications, Incorporated, for instance? Street & Smith, like the National Biscuit Company, assembles materials, packages them, and distributes them. Rumford Press, which prints this magazine, like the American Can Company, or Container Corporation, makes packages.
You hold in your hand a physical package, packed with word-structured concepts. You buy a thing of paper, ink, and metal and glue - just as you buy a thing of glass, metal and plastic when you buy a radio tube. In each case, the object is merely a package-structure for the function which you really desire.
Any organism will smother in any of its own products if present in excess; a waste product is one present either in excess of the usable amount, or one which is not usable.
Any organism - including the organism known as a "state" or "nation" - will smother in an excess of its own ill-regulated and ill-distributed products. The basic biological law is perfectly applicable to a state, or a society.
The flaw in the false syllogism is the one the computer would have spotted immediately.
"Define the term 'right'!".
This is the distributive term in the syllogism, and it is so undefined as to be meaningless. The falsity of the syllogism is equivalent to that in "All men are human beings. Some human beings are mortal. Therefore all men are mortal". The flaw in that syllogism is the undistributed middle term in the second statement.
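Spelled out the way the machine would demand - my rendering in predicate logic, not Campbell's own wording - the parallel syllogism fails because its second premise asserts only "some":

    % The parallel syllogism in predicate logic (illustration only).
    % M(x): x is a man;  H(x): x is a human being;  D(x): x is mortal.
    \[
      \forall x\,\bigl(M(x) \to H(x)\bigr), \qquad
      \exists x\,\bigl(H(x) \land D(x)\bigr)
      \;\not\vdash\;
      \forall x\,\bigl(M(x) \to D(x)\bigr)
    \]
    % The middle term H is undistributed in the second premise: "some human
    % beings are mortal" says nothing about the particular human beings who are
    % men. A two-element counter-model - one mortal non-man, one immortal man -
    % satisfies both premises and falsifies the conclusion.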
But when it comes to "right", human beings are very, very skittish indeed. They're too apt to find that some of their pet beliefs and personal preferences will be ruled out if they accept any hard, clean-cut definition of "right".
Since a machine has no rights to begin with, no beliefs, prejudices, preferences or foibles, it will most unkindly and uncompromisingly refuse to operate at all until you define what you mean by "right".
I have a deep conviction that a vastly humbled and chastened - but vastly improved! - humanity will result from the effort to teach a machine what Man believes.
The terribly tough part about it is that to do it, Man will, for the first time, have to find out exactly what he does believe - and make coherent, integrated sense of it!

August 1953

END