The potential risks and dangers of artificial intelligence

In the last few years, several high-profile voices, from Stephen Hawking to Elon Musk and Bill Gates, have warned that we should be more concerned about possible dangerous outcomes of supersmart AI. Musk is among several billionaire backers of OpenAI, an organisation dedicated to developing AI that will benefit humanity. But for many, such fears are overblown.

Discussions about artificial intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking publicly about the threat of AI to the future of humanity.

Over the last several decades, AI — computing methods for automated perception, learning, understanding, and reasoning — has become commonplace in our lives. We plan trips using GPS systems that rely on AI to cut through the complexity of millions of routes to find the best one to take.
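
The route-finding example rests on classic graph-search algorithms such as Dijkstra's, which always expands the cheapest known path first so the first route that reaches the destination is guaranteed to be optimal. A minimal sketch in Python; the junction names and travel times below are made up purely for illustration:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: pop the cheapest frontier entry first,
    so the first time we reach the goal the route is optimal."""
    frontier = [(0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# A toy road network: travel times in minutes between junctions.
roads = {
    "home":     [("highway", 10), ("backroad", 4)],
    "highway":  [("city", 15)],
    "backroad": [("highway", 3), ("city", 25)],
}
print(shortest_route(roads, "home", "city"))  # (22, ['home', 'backroad', 'highway', 'city'])
```

Real navigation systems add heuristics (as in A*) and live traffic data on top of this core idea, but the search structure is the same.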

Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. AI algorithms detect faces as we take pictures with our phones and recognize the faces of individual people when we post those pictures to Facebook.

Internet search engines, such as Google and Bing, rely on a fabric of AI subsystems.

On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. Several companies, such as Google, BMW, and Tesla, are working on cars that can drive themselves — either with partial human oversight or entirely autonomously.

Beyond the influences in our daily lives, AI techniques are playing a major role in science and medicine. AI is at work in hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are helping to find important needles in massive data haystacks.

For example, AI methods have been employed recently to discover subtle interactions between medications that put patients at risk for serious side effects. The growth of the effectiveness and ubiquity of AI methods has also stimulated thinking about the potential risks associated with advances of AI.

The Association for the Advancement of Artificial Intelligence (AAAI) considers the potential risks of AI technology to be an important arena for investment, reflection, and activity.

One set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software. For example, apps on our smartphones sometimes crash.

Major software projects, such as HealthCare.gov, are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extremely costly outcomes and deaths.

However, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.

There is reason for optimism. Many non-AI software systems have been developed and validated to achieve high degrees of quality assurance. For example, the software in autopilot systems and spacecraft systems is carefully tested and validated. Similar practices must be developed and applied to AI systems.
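
One way such validation practices can carry over to AI systems is property-based or metamorphic testing: checking that a model's outputs satisfy invariants that must always hold, even when no single "correct answer" is known in advance. A minimal Python sketch, using a hypothetical risk-scoring model as a stand-in (the function and its coefficients are invented for illustration):

```python
def risk_score(age, dosage):
    """Hypothetical stand-in for a learned model:
    predicted complication risk given patient age and drug dosage."""
    return 0.01 * age + 0.05 * dosage

def test_monotone_in_age():
    # Metamorphic property: holding dosage fixed, predicted risk must
    # never decrease as age increases, whatever the model's internals.
    for dosage in range(0, 10):
        scores = [risk_score(age, dosage) for age in range(20, 90, 10)]
        assert scores == sorted(scores), "risk must be monotone in age"

test_monotone_in_age()
print("monotonicity check passed")
```

Tests like this cannot prove an AI system correct, but they catch whole classes of regressions that example-by-example unit tests miss.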

Another challenge is to ensure good behavior when an AI system encounters unforeseen situations. Our automated vehicles, home robots, and intelligent cloud services must perform well even when they receive surprising or confusing inputs.
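
One common engineering pattern for surprising inputs is to have the system abstain when its confidence is low, handing control to a safe fallback rather than acting on a shaky guess. A minimal Python sketch; the labels, scores, and threshold below are illustrative assumptions, not drawn from any particular system:

```python
def classify_with_fallback(scores, threshold=0.8):
    """Return the top label only when the model is confident enough;
    otherwise defer to a safe fallback (e.g. alert a human operator,
    or have a vehicle slow down) instead of guessing."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label
    return "defer-to-human"

# Familiar input: one label dominates, so the system acts on it.
print(classify_with_fallback({"stop_sign": 0.97, "billboard": 0.03}))  # stop_sign
# Surprising input: the model is unsure, so it abstains.
print(classify_with_fallback({"stop_sign": 0.55, "billboard": 0.45}))  # defer-to-human
```

Choosing the threshold is itself a safety decision: too low and the system acts on noise, too high and it defers so often it becomes useless.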

A second set of risks is cyberattacks: AI algorithms are no different from other software in terms of their vulnerability to cyberattack.

But because AI algorithms are being asked to make high-stakes decisions, such as driving cars and controlling robots, the impact of successful cyberattacks on AI systems could be much more devastating than attacks in the past. US Government funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques in themselves will provide novel methods for detecting and defending against cyberattacks.

Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large-scale cyberattacks. Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity.

Scenarios like these refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave. This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands in a literal manner.

It should also continuously monitor itself to detect abnormal internal behaviors, which might signal bugs, cyberattacks, or failures in its understanding of its actions. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance.
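
As an illustration of such internal monitoring, here is a minimal Python sketch of a watchdog that tracks a system's recent outputs and flags statistical outliers. Real systems would monitor many richer signals; the window size and sigma threshold here are arbitrary assumptions:

```python
import statistics

class OutputMonitor:
    """Track recent outputs and flag values that drift far from the
    norm -- a crude proxy for detecting 'abnormal internal behavior'."""

    def __init__(self, window=50, max_sigma=3.0):
        self.history = []
        self.window = window
        self.max_sigma = max_sigma

    def check(self, value):
        # Only judge once we have a baseline of normal behavior.
        if len(self.history) >= 10:
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            if abs(value - mean) > self.max_sigma * stdev:
                return False  # anomalous: trigger a safe fallback or alert
        self.history.append(value)
        self.history = self.history[-self.window:]
        return True

monitor = OutputMonitor()
for v in [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 9.9]:
    monitor.check(v)  # build up a baseline of normal outputs
print(monitor.check(10.05))  # True: within the normal range
print(monitor.check(42.0))   # False: flagged for review
```

Flagged values are not acted upon or added to the baseline, so a compromised or malfunctioning component cannot quietly shift the monitor's notion of "normal."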

Now, Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).

A “narrower” artificial intelligence might, for example, simply analyze scientific papers and propose further experiments, without having intelligence in other domains such as strategic planning, social influence, cybersecurity, etc.

Narrower artificial intelligence might change the world significantly, to the point where the nature of the risks changes dramatically from the current picture, before fully general artificial intelligence arrives.

Exploring the risks of artificial intelligence, I asked experts what they considered to be the most likely risk of artificial intelligence in the next 20 years, and how potential solutions might be engineered.

According to a recent article in kaja-net.com, artificial intelligence (AI) will redesign health care with unimaginable potential. The author sees great benefits, and so do I, but he dismisses the risks – risks that visionaries like Bill Gates, Elon Musk, and Stephen Hawking warn against.