No one knows what it would do to a creative brain to think creatively continuously. Perhaps the brain, like the heart, must devote most of its time to rest between beats. But I doubt that this is true. I hope it is not, because [interactive computers] can give us our first look at unfettered thought.
J. C. R. Licklider, “Computers in the University,” in Computers and the World of the Future (1962)
Ah, but a man’s reach should exceed his grasp,
Or what’s a heaven for?
Robert Browning, “Andrea del Sarto”
“I’ve never been certain whether the moral of the Icarus story should only be, as is generally accepted, ‘don’t try to fly too high,’ or whether it might also be thought of as ‘forget the wax and feathers, and do a better job on the wings.’”
Stanley Kubrick
Yes, it’s all about human intellect, that strange product of a decisive moment in our evolution in which we got just enough working memory to begin to generate counterfactuals, to imagine that things could be different, and then to invent symbols (language, of course, but also art, math, music) to be able to share those imaginings with each other. We even came up with a word for the concept of “symbol” itself, a way of talking about how we were thinking, and thus generated what Douglas Hofstadter calls “an infinitely extensible symbol set.” As we shared our symbols, we began to be intentional about what we could build together, and what might persist beyond our own deaths.
So much extraordinary capacity, and we take most of it for granted.
And now come computers, with the promise of helping us generate, store, retrieve, share, and more fully understand the rich symbols that form the record of our species.
In “Man-Computer Symbiosis,” Licklider talks about all of this symbol-use in pretty straightforward ways. The essay reads very much like a project outline, at times almost bureaucratically so. There’s the research (on himself), the conclusion, and thus the problem description. There’s the itemized analysis of what will be needed to realize the vision of intellectual augmentation he imagines. By the end of the essay, he’s outlined all the engineering and seems well on his way to putting a budget together. I’ve always found the end very abrupt, oddly so, given the highly metaphorical way the essay begins. It’s very different from Vannevar Bush’s poignant, almost plaintive ending to “As We May Think.”
The passage I’ve chosen exhibits both the project-oriented Licklider (he preferred to go by “Lick”) and the dreamer Lick. It’s interesting to see how they don’t quite go together:
In short, it seems worthwhile to avoid argument with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone. There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association. A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance. That would leave, say, five years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.
Lick simply assumes that AI will be developed, and that when it is developed, “dominance” in “cerebration” (meaning “thought,” I believe) will belong to “machines alone.” We will invent our own obsolescence. Yet the 15, or 10, or 500 years during which we invent our obsolescence “should be intellectually the most creative and exciting in the history of mankind.” The note of excitement is familiar and thrilling. And we are living in that time as I type these words on my computer’s keyboard, which makes Lick’s pronouncement doubly thrilling.
Yet I hesitate to say with Lick, “I, for one, welcome our new robot overlords.” Moreover, I don’t think he’s being very careful with his own argument. In the essay, he distinguishes “formulated” thinking from “formulative” thinking. The latter is more about problem-finding, about using the algorithmic powers of the computer in concert with the goal-setting and meaning-making activity of the human being to refine the human’s questions and enrich the scale and depth of the human’s powers of imagination and analysis. Does Lick believe that computers will eventually become superior meaning-makers? (Does the Netflix recommendation engine create meaning, or simply reveal it?) Does Lick believe that computers will identify problems for us to work on, optimizing the work for our messy associative brains? Does he believe that creativity itself will take on a new meaning independent of human input or judgment? Hard to say. I don’t think he’s consistent in the essay. And in truth, as John Markoff notes in What the Dormouse Said, the split between the AI researchers and those who, like Doug Engelbart, imagined that computers would augment human intellect, not replace it, was eventually unbridgeable.
And yet the dreams were similar, which brings me back to the epigraphs. From cave paintings to epic poetry, there’s strong evidence that ever since human beings became symbol users and symbol sharers (really two aspects of the same thing), we have found our minds to be spooky, paradoxical, oddly free, and strangely limited. And in the midst of that feeling, we aspire to greater heights of ingenuity and invention. It is our very minds that drive us to enlarge our minds, since somewhere in our minds we find we have not reached the end of what we can imagine grasping.
That’s a strange thought, a troubling thought, an exhilarating thought. Many cautionary tales have sprung up around this thought. Many dreams have emerged from it as well. Given the nature of our ingenuity, I’m not sure we have much hope of stalling this thought.
Might as well see what we can build with it.
As Pete Townshend once sang, “No easy way to be free.”
Great use of that Kubrick quote, Gardner.
You remind me of one interpretation of utopian thinking. The point isn’t so much to build a perfect society as to help us think more creatively about making a better life.
Hi Gardner
I think we can see some of the challenges for the AI debate already.
I think we trust the truth of data we do not necessarily understand.
‘Big data’ has its own gravitas, which comes in part from its economic weight. The systems are designed and scoped in ways that shape the kind of message that can be pulled from the data, and that shaping is largely done by the self-interest of money (money as a person; well, isn’t that what corporate personhood means? money with limited liability?). So the AI tools are functioning within a context that limits their vectors of inquiry, and we trust the results because global systems have become too complex to understand. We look for pattern and signal in massively complex systems which are deliberately, imperfectly (!!) representing the exquisitely complex forms of interwoven life and ecology.
Engelbart’s collaboration is potentially true for people who take the time to understand the AI system, its flaws and opportunities, and who retain their own agency and authority in the collaboration.
That kind of human intelligence is not usual in the way we interact with technology, because the money that buys the systems shapes them. The people who make them may understand their limits and opportunities, but the people who use them are required to trust them, as they are the infrastructure of the money we serve. I guess drones come to mind for me. There are layers of separation between the people making decisions about the tools and the people at the receiving end, and we have definitely reached a point where an AI which cannot be negotiated with is causing death.
Banking systems where loops of machine-generated transactions cause collapse for communities. Intelligence systems like the NSA or Facebook which extrapolate threats or opportunities to sell to or steer people.
In a society where few have agency over the systems, and those systems appear to have very limited liability for their actions, has the AI already become ungoverned?
Perhaps you need to have systems where people respect other people in order to have tools which have those values. Whether or not the tools have autonomy could be a grey line for a long time as money invests more trust in complex systems. What choice would an AI make if it has been built from this starting point?
How would it learn about bees, dung beetles, viruses, seasons, symbiosis, climate, oceanic gyres of life and plastic? Why do we not value that complexity? It is the AI in the room by which we are already governed, and which we choose to bend rather than to understand. imho