With artificial intelligence honed to the point that robotic cars may drive better than human drivers, and a computer has been crowned a Jeopardy champion, are mere biological brains in danger of becoming obsolete? One leading scientific mind says we need to stop and think before this technology gets out of control.
Tesla and SpaceX founder Elon Musk posted a provocative statement on Twitter over the weekend about the potentially catastrophic dangers of artificial intelligence, also known as A.I. The tweet was quickly shared thousands of times. After recommending a book on computer intelligence, Musk wrote: "We need to be super careful with AI. Potentially more dangerous than nukes."
That posting was soon followed by another pessimistic tweet about how we humans might fare against future machines that are smarter than we are: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."
His concerns stand in contrast to those of many in the tech world who are optimistic about the promise of computers that can learn from their mistakes and "think" for themselves. The technology news site Mashable notes that some experts "may use Musk's A.I. concerns -- which remain fantastical to many -- as proof that his predictions regarding electric cars and commercial space travel are the visions of someone who has seen too many science fiction films."
Musk has been on the cutting edge of technology since he sold his first piece of computer code at age 12. He made a fortune when he sold his first software company to Compaq for over $300 million in 1999, and another when PayPal was sold to eBay three years later. Now, Musk runs two future-oriented companies: Tesla Motors, which produces and sells electric cars, and SpaceX, a leader in the commercial spaceflight business.
Musk has a financial stake in the future of A.I. as well. He invested in A.I. development companies DeepMind and Vicarious, hoping to keep tabs on the pace of commercial development. But he told CNBC in June 2014 that these investments were made "not from the standpoint of actually trying to make any investment return... I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."