Fighting A.I. With A.I.

As we increasingly rely on artificial intelligence, we must prepare for new types of hacks.


We already rely heavily on internet-connected devices to get us through the day, and they’re increasingly becoming artificially intelligent. Digital assistants such as Amazon’s Alexa are growing in popularity, self-driving cars are rolling onto the horizon, and the military is testing drones that could operate without human supervision. The big question is, how safe are these things from hackers?

We dug into that question on Science Friday last week, with a panel of experts who were part of a recent debate at Arizona State University about the future of A.I. Here are a few takeaways from our conversation:

There’s already widespread vulnerability among our connected devices.

Take cars, for instance.

“Our cars and other Internet-of-Things kinds of devices are actually pretty hackable at the moment,” said Kathleen Fisher, a professor of computer science at Tufts University.

For instance, “there are about five different ways hackers can reach a car from outside through the Bluetooth interface that lets you talk to the car without your hands, or through the telematics unit that arranges to call for an ambulance if you’re in an accident,” Fisher said. “And once a hacker has broken in, they can change the software that controls how the cars brake, or accelerate, or steer, or do lots of other functionality.”

“Basically, cars are networked computers on wheels, and are quite vulnerable to remote attack,” she said.

Medical devices are also susceptible.

For example, computer security expert Jerome Radcliffe showed a few years ago “that he could wirelessly hack his insulin pump to cause it to deliver incorrect dosages of medication,” added Fisher.

Electronic voting systems aren’t immune, either.

“I think an interesting thing is that, the more you know about computers, the less comfortable you are with electronic voting,” said Fisher. “A voting machine is going to count according to the latest version of software that’s on it. And that could be the correct version of the software that was installed by the people running the election, but it could be that it was hacked by somebody, or somebody sticks in a thumb drive and changes the code.” And then it counts according to the new code.

Why are our devices so susceptible?

For one, Internet of Things products are often built without security concerns in mind, according to Fisher. “People are excited about the functionality and the services that these new devices are providing and rush to get products out there,” she said. “And we don’t have a very good way of ensuring that these devices are built to good security standards. It’s very hard to measure security, and it’s very hard to monetize security.” Who pays for security, and whether consumers would pay a premium for secure devices, “is really unclear,” said Fisher.

What’s more, “getting really good security is a super hard problem. There are many, many dimensions that you have to get right if you want the system to be secure,” said Fisher.

Think of your house, for instance. “To make your house secure, it’s not enough to lock the front door. You have to lock the front door and the back door. And that’s not enough. You have to lock all the windows,” said Fisher.

It’s the same thing in a computer. “You have to lock every entryway. You have to make sure that your users are well-trained, that they don’t give away the password to somebody who calls on the phone and is convincing that they’re the IT department and need the password. You have to have correct hardware,” said Fisher. “There’s just many, many things that you have to get right. And if you don’t get them all right, then there’s a vulnerability that a hacker can exploit.”

What sorts of A.I. attacks might we expect?

“When we build systems that are smarter, that can do more for us using machine vision, machine learning, to understand situations,” said Subbarao Kambhampati, a professor of computer science and engineering at Arizona State University, “we introduce what I refer to as ‘new attack surfaces’” — that is, new places for hacks to occur.

In so-called machine learning attacks, the hacker figures out a way to control the stream of data coming into a system, making the system do things that its designers did not intend.

For instance, last year a team of researchers demonstrated how feeding false inputs to an image recognition system — potentially the kind that might be used in self-driving cars — could cause it to misclassify a stop sign as a yield sign.
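
The researchers’ exact technique isn’t detailed here, but attacks in this family typically rely on what are known as adversarial examples: inputs altered just enough to flip a model’s prediction while looking unchanged to a person. Below is a minimal sketch of one widely studied method, the fast gradient sign method, assuming a PyTorch image classifier; the model, image tensor, and step size are illustrative stand-ins, not the setup from that study.

```python
import torch

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarial copy of `image` using the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    # Measure how wrong the model is about the correct label.
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0.0, 1.0).detach()
```

To a human, the perturbed image is essentially indistinguishable from the original, yet the classifier’s answer can flip, from “stop” to “yield,” for example.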

Eric Horvitz, managing director of Microsoft Research’s main lab in Redmond, Washington, had a somewhat eerie prediction: “I think we can all assume that, within a few years, A.I. systems and advanced graphics and rendering will be able to really spoof in high-fidelity ways your identity, even videos of you talking and saying things, videos of leaders.”

Is there such a thing as being “hack-proof”?

Unfortunately, “‘hack-proof’ is a concept that isn’t well-defined,” said Fisher. “You always have to talk about hacking with respect to a threat model and what kinds of things you can imagine an attacker doing.”

The good news, though, is that “there is work afoot to think through best practices and potentially, one day, standards,” according to Horvitz. He said that a group called Partnership on A.I. to Benefit People and Society — formed by Amazon, Apple, Facebook, Google, DeepMind, IBM, Microsoft, and others — was established with that goal in mind.

What are some ways we can improve security?

As Fisher pointed out, we can’t rely on good old-fashioned human surveillance. “Humans won’t be fast enough to be able to meaningfully intervene in many cases. And in other cases, the timescales will be so long that the humans won’t be able to intervene,” she said. “So I think either timescale—too short or too long—means that people won’t really meaningfully be able to monitor.”

One option, then, could be using A.I. to fight A.I. hacks, according to Kambhampati. “I think it’s going to be an arms race,” he said. “We are using these tools because they’re very useful. But everything that’s useful can also be hijacked. And you don’t give up on that — you basically make security and safety a maintenance goal. And you have to use the same technology to work towards ensuring them.”
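
As one illustration of that approach, defenders already use machine learning to spot behavior that doesn’t match a system’s normal patterns. The sketch below is a hypothetical example rather than anything described by the panelists: it trains scikit-learn’s IsolationForest on made-up traffic statistics for a device, then flags readings that deviate sharply from them.

```python
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical features summarizing a device's network traffic:
# packets per second, average payload size (bytes), distinct ports.
rng = np.random.default_rng(seed=0)
normal_traffic = rng.normal(loc=[100, 512, 4], scale=[10, 50, 1], size=(1000, 3))

# Learn what "normal" looks like from traffic observed during ordinary use.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of unusual traffic, as a hijacked device might produce.
suspicious = np.array([[900, 64, 40]])
print(detector.predict(suspicious))  # -1 means the sample is flagged as anomalous
```

A detector like this only knows the patterns it was trained on, which is exactly the limitation Fisher raises next.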

Fisher warned, however, that using A.I. to combat hacks has its pitfalls. “The problem is that then you’re limited by the scope of the artificial intelligence system — how you built it, how you trained it, what you anticipated, and what you built into it, or what it was able to infer. The sort of unknown unknowns are an ongoing challenge.”

These quotes have been lightly edited for clarity. For a full transcript, click here and scroll down to “transcript.”

Meet the Writers

About Julie Leibach

Julie Leibach is a freelance science journalist and the former managing editor of online content for Science Friday.

About D. Peterschmidt

D. Peterschmidt is a producer, host of the podcast Universe of Art, and composes music for Science Friday’s podcasts. Their D&D character is a clumsy bard named Chip Chap Chopman.
