AI disaster won’t look like the Terminator. It’ll be creepier.
By Dylan Matthews – When I heard five or so years back that people in Silicon Valley were getting worried about artificial intelligence causing human extinction, my initial reaction was extreme skepticism.
A large reason for that was that the scenario just felt silly. What did these folks think would happen — was some company going to build Skynet and manufacture Terminator robots to slaughter anyone who stood in their way? It felt like a sci-fi fantasy, not a real problem.
This is a misperception that frustrates a lot of AI researchers. Nate Soares, who runs the Machine Intelligence Research Institute, which focuses on AI safety, has argued that a better analogy than the Terminator is the “Sorcerer’s Apprentice” scene in Fantasia. The problem isn’t that AI will suddenly decide we all need to die; the problem is that we might give it instructions that are vague or incomplete, and that lead to the AI following our orders in ways we didn’t intend.
RB Note: Considering how easily influenced the huddled masses already are, it would seem we are ripe for the picking. And possibly just foolish and arrogant enough not to notice until it was too late.