The real and present danger of Artificial Intelligence

Jerry Waters, Contributor

Growing up, I saved all summer for a Commodore 64 computer and learned BASIC so I could program it to randomly generate numbers for my characters in Dungeons and Dragons. Now, mentioning the Commodore 64 is a quaint anachronism: we can talk to the supercomputers in our pockets, and they can pinpoint our location anywhere on earth. Our technologies are growing exponentially, and average people are having a hard time keeping up with and understanding the myriad complex consequences.

Recently, Elon Musk, the CEO of Tesla and SpaceX, gave a dire warning about the potential dangers of Artificial Intelligence (AI), because he believes the technology is expanding faster than we can regulate it or fathom its risks.

I may date myself here, but Hollywood tried to warn us with films like 2001: A Space Odyssey and War Games. We’ve also had similar ethical debates over genetic engineering and cloning. In fact, I just read somewhere that there’s a doctor ready to perform the world’s first human head transplant. To be honest, aside from it being existentially creepy, I’m not really sure how I feel about that, and I haven’t even begun to untangle the messy legal and ethical implications of such a procedure.

One thing I know is that we could unintentionally open Pandora’s box if we don’t establish regulations and processes based on sound ethics and solid legal principles. I also know that it’s worth the investment to enforce those regulations and processes, or we’ll be watching the reality television version of Frankenstein.

AI has the potential to help humans beyond their wildest dreams or to become the nightmare from which we never wake. I’ve used Siri on my iPhone to recommend restaurants and to snarkily ask, “What did the fox say?” Obviously, the technology is in its infancy. What happens when terrorists decide to use it to create a computer super virus? What happens if AI designers develop a system that becomes self-aware and determines that mankind is a threat to its survival?

How much should AI be able to interface with other technologies, and why? Is someone planning a human brain/AI hybrid?

We know what can happen if we don’t take the time to answer these questions. Aren’t we still dealing with the consequences of the proliferation of nuclear weapons? When you get right down to it, aren’t most of our current human conflicts over who controls the fossil fuels that power our existing technologies?

Elon Musk is adamant that we do something now because “by the time we are reactive in AI regulation, it’s too late.” It’s noteworthy that other leading minds, folks like Stephen Hawking and Bill Gates, echo his concerns. The big questions they ponder are: 1) Who controls the technology and determines its impact? 2) How do we prevent AI systems from becoming completely autonomous? 3) How do we prevent them from turning against humanity?

Certainly, it’s easy to dismiss all of this as hyped-up fear, but the technology exists right now. With all the hacking that’s been dominating our headlines, how do we know this information will stay secure and not fall into the hands of some James Bond archetypal villain?

As Alex Morritt put it, “Whoever perceives that robots and artificial intelligence are merely here to serve humanity, think again. With virtual domestic assistants and driverless cars just the latest in a growing list of applications, it is we humans who risk becoming dumbed down and ultimately subservient to machines.”

After this last disaster of a presidential election, it’s hard to argue against that point. In the meantime, I’m going to use the app on my phone to check my refrigerator’s live-feed video and see whether I need to buy more milk while I’m out.