Vernor Vinge, Mathematician; Computer Scientist; Author, True Names; The Coming Technological Singularity
Synopsis: A math professor for thirty years and also a science fiction writer, Vernor Vinge talks about possible scenarios for a technological singularity, including whether we might get there without even knowing it.
"We might think, 'Gee, I don't remember that mountain range west of San Francisco'. Or that sand talking back to me. Humans are operating under illusison that they are self-aware. Adam Smith's invisible hand would really be manifest!"
He peppers his talk with frequent - too frequent for this live-blogger to capture - references to sci-fi novels and short stories that vividly illustrate the possibilities.
'Exponential,' as in 'exponential growth,' gets used in conversation all the time. The exponential growth of a baby lasts 17 years. Often exponential growth doesn't last: sometimes it ends in catastrophic collapse, sometimes in simple saturation.
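To make that saturation point concrete, here is a minimal sketch (my illustration, not from the talk) comparing pure exponential growth with logistic growth, the standard model for growth that levels off; the growth rate and carrying capacity are arbitrary assumptions:

```python
# Sketch (illustrative numbers only): exponential growth vs. logistic
# growth, which starts out looking exponential but saturates at a cap K.
def exponential(x0, r, steps):
    return x0 * (1 + r) ** steps

def logistic(x0, r, K, steps):
    x = x0
    for _ in range(steps):
        x += r * x * (1 - x / K)  # growth term shrinks as x approaches K
    return x

for t in (0, 10, 20, 30, 40):
    print(f"t={t:2d}  exponential={exponential(1, 0.5, t):12.1f}  "
          f"logistic={logistic(1, 0.5, 100, t):6.1f}")
```

The two curves are nearly identical early on and then diverge wildly, which is exactly why extrapolating an exponential is risky.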
In 1968, there were plenty of electrical engineers who could have given you a good estimate of the power of computers in 2005. Very few of them would have been correct about what those computers would be used for. What is the killer app if growth continues?
A roboticist at CMU has done some thinking on this. If you look at Moore's Law and at the raw hardware power of the human brain, there is a crossover point. What happens one Moore's Law generation after that? In physics, a singularity also marks a point beyond which things are unknowable. We, as humans, would no longer be the drivers of progress.
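For a sense of the arithmetic behind that crossover claim, here is a back-of-envelope sketch. The numbers are my assumptions, not Vinge's: roughly 10^14 operations per second as a raw-hardware estimate for the brain (in the neighborhood of Hans Moravec's well-known figure), a guessed 2005 desktop throughput, and an 18-month doubling time:

```python
# Back-of-envelope sketch (assumed numbers, for illustration only):
# project Moore's Law forward until hardware crosses a brain estimate.
BRAIN_OPS = 1e14    # assumed ops/sec estimate for the human brain
START_YEAR = 2005
START_OPS = 1e10    # assumed throughput of a 2005 desktop machine
DOUBLING = 1.5      # years per Moore's Law doubling

year, ops = START_YEAR, START_OPS
while ops < BRAIN_OPS:
    year += DOUBLING
    ops *= 2

print(f"crossover around {year:.0f}; "
      f"one Moore's Law generation later: {year + DOUBLING:.0f}")
```

Under these assumptions the crossover lands a couple of decades out; the point of the exercise is only that a fixed doubling time turns a ten-thousand-fold gap into a date.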
If Mark Twain came back to this time, we could catch him up with the modern world in an afternoon. We cannot do the same with a goldfish. [Uh, we're the goldfish, guys, if you're trying to follow along.]
Profoundly important developments like the printing press, agriculture, and fire are singularity points of a sort - but they don't compare to the technological singularity: they lack the quality of unexplainability.
There is an analogy: the rise of humankind within the animal kingdom. Another author says, "Life is just the prologue to intelligence." I'm not sure I agree with this. Another close analogy is the origin of life on Earth.
This sort of transcendence ...
Although there is something comforting in the idea that things get better and better and intelligence keeps increasing, what makes us nervous is the thought that this will happen in our lifetimes.
What if the singularity doesn't happen? Raw hardware power is not enough - perhaps we never get the software working. Perhaps there will be catastrophic violence - sometimes we are our own worst enemy. We might be in a Russian roulette kind of situation: ah, that didn't hurt - click - ah, that didn't hurt too much - click.
See Sir Martin Rees, "Our Final Hour." Right now might be the most important time in history. The singularity itself could be catastrophic.
His own view is that while the technological singularity is not a sure thing, it is the most likely non-catastrophic scenario on the horizon.
The scenarios: Artificial Intelligence (AI); the Internet itself waking up and becoming conscious (see Bruce Sterling's Maneki Neko for a picture of this world). More immediate: intelligence amplification (IA) is happening now. And fine-grained distributed systems (embedded microprocessors, networked and even more ubiquitous) are a bit behind Moore's Law (see Karl Schroeder, Ventus).
He doesn't see mention at this conference of the more dangerous realms of intelligence amplification, such as direct neural hookups. Setting aside the sci-fi aspects of IA, its important here-and-now promise is in network-mediated interactions among humans.
Before the singularity, you don't have superhuman intelligence.
Soft takeoffs. The complete transition takes years and doesn't appear exponential. As you get late into the transition, the rate of change and what is happening become unintelligible to normal humans. The transition takes about twenty years; see Charles Stross's Accelerando.
Hard takeoffs. There are no precursors. People are still debating whether it is going to happen at all, and then it comes about in less than 100 hours. See Greg Bear, "Blood Music" (a character's sandwich starts talking to him one morning). A hard takeoff is plausible: it resembles an earthquake or a snow avalanche. Even paleolithic humans could adapt to cold weather faster than buffalo.
My intuition is that a hard takeoff would be a very bad thing, and trying for a soft takeoff would be a good thing. Intelligence amplification is a good way to get there. If it really worked, there would be people around who could keep up with what's going on.
Q: (by Brad Templeton, EFF) They wouldn't have the perception that a hard takeoff is a bad thing. They're in charge.
A: Yes. History always ends happily.
Q: Would we even know? Is there a possibility that we would not be aware of this intelligence, but things would be radically different?
A: We might think, gee, I don't remember that mountain range west of San Francisco. Or the sand talking to me. Humans are operating under the illusion that they are self-aware. Adam Smith's invisible hand would really be manifest!
Brad: I'll sell you hard takeoff insurance!
Q: What if this happens in biology instead?
A: That would be transitory. In the long run it's the other substrates [where the singularity happens].
It was a great talk by Dr. Vinge. Since his focus was on a hard take-off for AI (or networked intelligence), I would add two science fiction stories that address a "hard take-off" in intelligence amplification to complement his list:
Poul Anderson's 1954 novel "Brain Wave" is probably one of the best stories addressing the impact of intelligence amplification on society. The thesis is that the solar system has been sitting in what Vinge would describe as a "Slow Zone," and once it travels out of it, human (and animal) intelligence triples or quadruples.
Society is left unrecognizably altered in a few years.
John Brunner's 1973 novel "The Stone That Never Came Down" has scientists discover a drug that significantly increases intelligence and empathy. In this one, society is substantially altered for the better: because the drug also boosts empathy, most of the potentially negative impacts of amplified intelligence are mitigated. It's probably his single most optimistic book (certainly in comparison to "The Sheep Look Up" or even "Stand on Zanzibar").
Two other "hard take-off" AI stories that bear mentioning are Arthur C. Clarke's 1972 "Dial F for Frankenstein" (the telephone system reaches a complexity level that becomes intelligent) and Heinlein's 1966 "The Moon is a Harsh Mistress" (where Mycroft the AI has emerged in stealth and sides with the colonists).
Dial F for Frankenstein appears to be online here: http://www.cybered.co.th/eng/DIAL.html
Posted by: Sean Murphy | September 20, 2005 at 11:32