

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, Oxford University Press, 324 pages, $29.95

In Frank Herbert's Dune books, humanity has long banned the creation of "thinking machines." Ten thousand years earlier, their ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. Human computers called Mentats serve as a substitute for the outlawed technology. The penalty for violating the Orange Catholic Bible's commandment "Thou shalt not make a machine in the likeness of a human mind" was immediate death.

Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of the Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.

Since the invention of the electronic computer in the mid-20th century, theorists have speculated about how to make a machine as intelligent as a human being. In 1950, for example, the computing pioneer Alan Turing suggested creating a machine simulating a child's mind that could be educated to adult-level intelligence. The mathematician I.J. Good observed that technology arises from the application of intelligence. When intelligence applies technology to improving intelligence, he argued, the result would be a positive feedback loop, an intelligence explosion, in which self-improving intelligence bootstraps its way to superintelligence. He concluded that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." How to maintain that control is the issue Bostrom tackles.

About 10 percent of AI researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Fifty percent think it will be developed by the middle of this century, and nearly all think it will be accomplished by century's end. Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.
