If there is a sexier branch of computer science than artificial intelligence (AI) and machine learning, it’s still hidden away in a lab somewhere and only a handful of people are working on it. Many big tech and industrial firms are investing heavily in AI research that they expect will eventually translate into better ways of doing things, and into profit.
While that research is promising in fields as diverse as automobiles and health care, AI also carries dangers that need to be taken into account as development proceeds.
A new report from researchers at several academic institutions and private entities surveys potential threats from malicious use of AI technologies and proposes ways to improve methods of forecasting, preventing and mitigating these threats.
The report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” identifies three types of threats that researchers expect to proliferate as AI systems become more powerful and widespread:
Expansion of existing threats.
The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.
Introduction of new threats.
New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.
Change to the typical character of threats.
We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.
The report notes that “nearly all” AI researchers expect AI systems to outperform humans on a variety of tasks within the next half-century. AI is a dual-use technology; that is, it can be used to do good or harm in roughly equal measure, much as people can:
For example, systems that examine software for vulnerabilities have both offensive and defensive applications, and the difference between the capabilities of an autonomous drone used to deliver packages and the capabilities of an autonomous drone used to deliver explosives need not be very great.
The report illustrates the expansion of an existing threat using spear phishing as an example of how AI can replace much of the human effort currently needed to make these attacks successful. We should expect more attacks of this kind as they become easier and cheaper to carry out.
Introducing new threats also gets easier:
For example, the use of self-driving cars creates an opportunity for attacks that cause crashes by presenting the cars with adversarial examples. An image of a stop sign with a few pixels changed in specific ways, which humans would easily recognize as still being an image of a stop sign, might nevertheless be misclassified as something else entirely by an AI system.
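The idea behind that kind of attack can be sketched in a few lines. The snippet below is a deliberately tiny toy, not the report’s method or a real vision system: the “image” is a short vector of pixel values and the “classifier” is a made-up linear model, but it shows how a perturbation bounded to a small epsilon per pixel can flip a model’s prediction while leaving the input nearly unchanged.

```python
import numpy as np

# Toy stand-in for an image classifier: a linear model over pixel values.
# All weights and pixel values below are invented for illustration.
w = np.array([1.0, -2.0, 0.5])   # hypothetical classifier weights
b = 0.1                          # hypothetical bias
x = np.array([0.3, -0.2, 0.4])   # the clean "image" of a stop sign

def classify(image):
    """Label the image 'stop sign' if the linear score is positive."""
    return "stop sign" if w @ image + b > 0 else "not a stop sign"

# Fast-gradient-sign-style perturbation: move each pixel by at most
# epsilon in the direction that lowers the classifier's score. For a
# linear model, that direction is simply -sign(w).
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(classify(x))                 # -> stop sign
print(classify(x_adv))             # -> not a stop sign
print(np.max(np.abs(x_adv - x)))   # no pixel moved by more than epsilon
```

A human comparing `x` and `x_adv` would see two nearly identical inputs, yet the model’s label flips; real attacks apply the same logic to deep networks, where the perturbation is computed from the network’s gradient rather than from `sign(w)`.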
The report includes several scenarios that describe what appear to be rather benign human-machine interactions that could turn sour — or even deadly — in a heartbeat. And while malicious use of AI is still primarily conducted in a laboratory by “white hat” researchers, a survey of attendees at a recent Black Hat conference found that 62% believe AI will be used in a malicious attack within the next 12 months.