There is a trend going around the Internet about the dangers of Artificial Intelligence. This fear is nothing new; people have been talking about the dangers of creating AI for some time. We’ve seen it in popular media for decades, and it goes back further still. If you allow the comparison, the 1920 silent film The Golem is an early example of an artificial creation run amok.
However, a couple of things are making the rise of AI an even more pressing issue for people.
First, over the past six decades, AI has grown from a fringe science into something pervasive in our world. Honestly, unless you #unplug[1], you cannot get away from it. While it was once used only at an institutional level, such as your bank watching your credit card transactions for fraud, you now carry it with you in your phone, when you drive your car, and when you check out at the store. When you go to the doctor, some of the information they use may be generated by some form of AI. And this is all good for us. It is good because AI often identifies issues that elude humans, and with greater accuracy. We like that part.
But because of this, we are also giving software increasing control over decisions in our lives. Algorithms make huge financial decisions, auto-pilot our airplanes, and so on. What’s more, we are backing ourselves into a corner of needing them. For example, surveillance drones push more video data to their human operators than those operators can process. Soon, AI observers will be pointing out the items of interest and classifying behavior.
Second, exacerbating the situation, popular media is currently trending toward fearful and sensational statements by successful, popular icons of technology. It seems that every opinion factory on the Internet is touting someone’s fear of AI. Here’s an article from the Washington Post where Steve Wozniak (co-founder of Apple) talks about a scary future with AI. Similar coverage may be found at Business Week, CNN, and Computer World. Elon Musk recently donated $10M to the Future of Life Institute to help keep AI from “turning evil”. (Actually, I can’t find anywhere that Mr. Musk said AI could “turn evil”; I suspect that’s just sensationalization for the headline.)
I want to explore this fear of AI and look at measured responses to those fears. Some of the questions we should answer are:
What is the popular understanding of the dangers of AI?
What is the [academic] community’s understanding of the dangers of AI, and how do they confirm or assuage the popular understanding?
What is the response by the community to address these dangers?
References
1. ↑ point: http://www.becomingminimalist.com/unplug-please/ and counterpoint: http://www.newyorker.com/culture/culture-desk/the-pointlessness-of-unplugging