I recently finished the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. It’s a history of the quest to create superhuman artificial intelligence and a survey of how things may go when we finally manage it. The whole thing is surreally hilarious in that it takes a lot of absurd science-fiction scenarios very, very seriously. Like, what if we set up an AI at a factory to maximize paper-clip production and it ends up converting the entirety of the observable universe to paper clips? Whoooops. This is a real concern. The uncanny atmosphere is magnified in the audio version, narrated by Napoleon Ryan, whose posh British supervillain performance makes you wonder if he might actually relish describing, say, the destruction, enslavement, or torture of quadrillions of simulated human minds per second. (Seriously, go ahead and check out the audio sample!)
The ideas, the scale, and the moral puzzles involved in this topic are boggling. AI seems to me to be the most significant philosophical concern humans have yet encountered. In some sections of the book, nearly every paragraph offers a possible near-future scenario involving the pointless doom or fundamental transcendence of the human race. It just tosses them out there one after another, each one a free premise for a whole series of apocalyptic or dystopian science fiction novels. Here are a bunch of scenarios that my own brain came up with while reading it.