Why The World’s Greatest Minds Think Artificial Intelligence Is Humanity’s Greatest Threat

Published November 10, 2015

Artificial Intelligence Needs To Be Regulated, But How?

Self-driving cars are already on the streets of California, and now people need to figure out how to regulate them. Image Source: Wearobo

The Future of Life Institute and its backers have made their point about the potential dangers of AI, and high-profile names have helped bring it into the mainstream. Actually regulating AI so that it cannot threaten humanity, however, raises an entirely different set of problems. One type of AI in need of immediate regulation is the self-driving car, where an ambiguity in its programming can have life-or-death consequences. If a child runs in front of a self-driving car that is programmed to protect human life, does the car swerve away from the child and endanger its passengers, or does it hit the child and protect the passengers? A human driver would make that decision in the moment, but who is to blame when a self-driving car makes it?
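To see why "protect human life" is so hard to turn into a rule, consider a toy sketch of the dilemma. This is not any real autonomous-driving system; every name and number below is hypothetical, invented purely to illustrate the point.

```python
# A toy sketch (not a real autonomous-driving API) of why "protect human
# life" is underspecified as a programmed rule. All names and risk numbers
# here are hypothetical, for illustration only.

def choose_maneuver(swerve_risk_to_passengers, stay_risk_to_pedestrian):
    """Pick the action with the lower estimated risk of serious harm.

    Both inputs are rough probabilities of serious injury (0.0 to 1.0).
    Even a rule this simple forces the programmer to decide in advance
    whose risk counts for more -- exactly the dilemma in the text.
    """
    if swerve_risk_to_passengers < stay_risk_to_pedestrian:
        return "swerve"  # endanger the passengers rather than the pedestrian
    return "brake"       # stay the course and brake hard

# The same situation, with slightly different risk estimates, flips the choice:
print(choose_maneuver(0.2, 0.6))  # prints "swerve"
print(choose_maneuver(0.7, 0.6))  # prints "brake"
```

The point of the sketch is that the "decision" is really made months earlier, by whoever wrote and tuned the rule, which is why the question of blame gets so murky.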

It doesn’t help that legislators in the U.S. don’t have the best track record when it comes to updating laws to keep up with technology—it took lawmakers until 1938 to address unlivable wages, working hours and even child labor following the Industrial Revolution a century earlier. Playing catch-up seems to be the go-to strategy, but if people are supposed to take Bill Gates seriously when he says he is “in the camp that is concerned about super intelligence,” playing catch-up to technology that poses a serious threat to humanity is like regulating nuclear weapons after a nuclear war.

The first step to effectively regulating AI is effectively defining what it is that needs to be regulated. That means selecting one definition out of the four most popular current definitions:

A protest, staged by the Future of Life Institute, campaigning to stop killer robots. Image Source: Empire State Tribune

1. Machines that think like a human – or machines that have similar thought processes as humans. Chess-playing computers don’t think about chess in the same way that a human does, so this definition doesn’t encompass all of what people think of as AI. Brute-force algorithmic processing has nevertheless proven capable of beating the best human chess players, so AI that can outsmart humans without thinking like one could prove to be the biggest risk.

2. Machines that act like a human – or machines with similar behaviors as humans. Human behavior itself isn’t fully understood, but acting like a human involves having emotions. Transferring the complexity of emotions into a computer program is, at the moment, impossible, but the dangers of such a feat are pretty obvious.

3. Machines that think rationally – or machines that have goals and the capacity to develop ways to achieve those goals. Unlike the previous two definitions, which are very specific, this one is too broad: essentially all machines act in a goal-oriented manner.

4. Machines that act rationally – or machines that only act in ways that move them toward completing goals. This definition is also too broad: a bottle-capping machine in a Budweiser factory has the sole goal of putting caps onto beer bottles, but just because its only action moves it toward that goal, does that make it intelligent?

The future regulation of AI relies on a comprehensive definition, but it also depends on how far we think scientists are able to take technology.

Author: Nickolaus Hines
Nickolaus Hines graduated with a Bachelor's in journalism from Auburn University, and his writing has appeared in Men's Journal, Inverse, and VinePair.
Editor: John Kuroski
John Kuroski is the editorial director of All That's Interesting. He graduated from New York University with a degree in history, earning a place in the Phi Alpha Theta honor society for history students. An editor at All That's Interesting since 2015, his areas of interest include modern history and true crime.
Cite This Article
Hines, Nickolaus. "Why The World’s Greatest Minds Think Artificial Intelligence Is Humanity’s Greatest Threat." AllThatsInteresting.com, November 10, 2015, https://allthatsinteresting.com/artificial-intelligence-threat. Accessed May 5, 2024.